
A locked core clock is much better than a power limit, so why is it not included by default?

Have you tried to set the clock manually for each 3060 Ti by pressing the “Run command” button?

I set the clock to 915 MHz for each of them with “nvidia-smi -i 0 -lgc 915”, where the number after “-i” corresponds to the GPU index. I set the PL to 126 W. I get the exact same hashrate as the 3070.
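Applied to a whole rig, that might look like the sketch below (assuming a hypothetical 4-GPU rig; adjust the indices, clock, and power limit for your cards; -lgc locks the graphics clock and -pl sets the power limit in watts):

# lock the graphics clock and cap the power limit per GPU
for i in 0 1 2 3; do
  sudo nvidia-smi -i $i -lgc 915   # lock graphics clock to 915 MHz
  sudo nvidia-smi -i $i -pl 126    # set power limit to 126 W
done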


You know, man, I did not try to set those individually, though I did read that in your post. I appreciate your follow-up; it's clearer now how to do that within the browser. I just set the 3070 from the OC table and it worked fine.

I made the core clock changes on my RTX 3070s, and it looks good. The RTX 3060 Tis don't like that change for some reason, so I reverted back to -500/2600.

Am I running my cards too aggressively? Do I need to back down the memory clock?

Here is my current config:


OK. It is good to limit the core, but it would be even better if we could limit the voltage at a certain core clock. For instance, I would like to lock in 1550 MHz at 700 mV for my 3070s. Is this something you are working on? This would allow users to tweak the performance for different mining activities while maintaining the lowest possible power draw.

As far as I know, this cannot be done under Linux because the NVIDIA driver/API doesn't have that option (it does on Windows).

The 3060 Ti is unhappy because its sweet spot is much higher. Try setting 1420 (+/-15) and it will be happy. My two FEs are happy with 1425:
Nvidia RTX 3060 Ti 8GB: 1425/8100, 62.01 MH/s, 120 W, 43°C / 65%
Also, it seems that the value you set via nvidia-smi depends on the core clock offset set in the OS. That is why j2h4u has his cards happy at 900 (he has -200 core in the screenshot). Actually, his cards would be just as happy, or even happier, at 885.
For my 3070s I set -400 for the core clock in the OS and then set 690 in nvidia-smi, and it is nearly the same as setting a 0 core clock and 1080 in nvidia-smi (the value changes in increments of 15, which is why the difference is not exactly 400).
But it looks like using this trick helps to save a couple of watts and a couple of degrees Celsius.
I'm still experimenting with different modes and values. Here is my rig. The first card has a core clock of -400, the others have 0:
[screenshot: rig stats]
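A sketch of the two 3070 setups compared above, with values from this post (the -lgc value snaps to roughly 15 MHz steps, so the two variants are only approximately equivalent):

# Variant A: -400 core clock offset in the GUI, then a low lock
sudo nvidia-smi -i 0 -lgc 690
# Variant B: 0 core clock offset in the GUI, then a high lock
sudo nvidia-smi -i 0 -lgc 1080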

I'm so grateful to both of you guys for sharing this, and I'm so happy that I found this thread. My rig is now much more stable, and its power efficiency has skyrocketed.


For some reason my efficiency is a little lower when I set 1410 MHz in the GUI instead of using the manual lock…

1410 MHz core settings

nvidia-smi -lgc 915 settings

This may be a stupid question, but I am a bit confused:
For the 3070s, do we set 1075 in the HiveOS GUI and then run nvidia-smi -i 0 -lgc 1075 in the Run command?
I was doing it all from the web GUI OC window, but now that I am reading your posts again, it looks like I need to adjust in both places.
Thanks much!

It's one or the other. If you have the latest stable version you can go with the GUI; otherwise you should try the command line.

I am using the latest HiveOS with the latest Nvidia driver (N 460.67) and used the GUI to set the values, but I wanted to make sure I did not have to use the Run command first and then put something in the GUI.

Sounds like if I just use the GUI with the 1075, I should be OK.

Thank you for your input!

you may be good.

Just in case, that's the latest version I mentioned.

I think I am on the latest non-beta version:

0.6-203@210403

A 20.40

N 460.67

It works for the 3070s as well. Setting the core clock to -400 and manually locking the GPU to 690 saves a few watts and improves efficiency over setting 1080 in the GUI.
Here the 3070s have a -400 core clock and the 3060 Tis have -500 in the GUI, manually locked to 690 and 900 respectively:
[screenshots: HiveOS GUI and T-Rex miner stats]
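For a mixed rig, that recipe might look like this sketch (assuming GPU 0 is a 3070 and GPU 1 is a 3060 Ti; your indices will differ):

sudo nvidia-smi -i 0 -lgc 690   # 3070, with -400 core offset set in the GUI
sudo nvidia-smi -i 1 -lgc 900   # 3060 Ti, with -500 core offset set in the GUI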

mxm - are you saying that I need to:

For rtx 3070s:

  1. Go to the GUI and set -400 / 2550
  2. Go to the RUN command and input nvidia-smi -i 0 -lgc 690 (where 0 is GPU 0 in the GUI)

For rtx 3060tis:

  1. Go to the GUI and set -500 / 2550
  2. Go to the RUN command and input nvidia-smi -i 0 -lgc 900 (where 0 is GPU 0 in the GUI)

Thank you!

Yes. That’s what I did.
There is nothing stopping you from trying all the variants and choosing the best.
Also, try to play with the values a bit. Change them up and down in increments of 15 and find the value with the best performance. The provided values are the best for my cards, but they may not be the best for yours; try to fine-tune the parameters for your own cards.
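A hypothetical sweep around the 3060 Ti value, stepping in 15 MHz increments (watch the hashrate and power draw between steps before settling on one):

# step the lock from 870 to 930 MHz on GPU 0
for clk in 870 885 900 915 930; do
  sudo nvidia-smi -i 0 -lgc $clk
  sleep 300   # give the miner time to stabilize before comparing readings
done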

Also, you can set the same value for multiple GPUs in a single command, like
nvidia-smi -i 0,1,3 -lgc 900
But you can’t set different values for different GPUs in a single command.
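So mixed values always take at least two invocations, for example:

sudo nvidia-smi -i 0,1,3 -lgc 900   # one value for GPUs 0, 1 and 3
sudo nvidia-smi -i 2 -lgc 690       # a different value needs its own call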

That is very helpful, but what confuses me is that after I run the command, my GUI still shows the -500 / 2500 and the watt usage does not change (it pulls as much as the limit is set to).

Wouldn't the GUI change and display the manually locked values after the command executes?

No, the GUI doesn't show what you set manually.

Try stopping the miner before you start the sequence, like this:
For the RTX 3070s:

1. Stop the miner.
2. Go to the GUI and set -400 / 2550.
3. Go to the Run command and input nvidia-smi -i 0 -lgc 690 (where 0 is GPU 0 in the GUI).
4. Restart the miner.

This may solve the watt-usage issue because the OC change will happen under less load.
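From the rig's shell, the same sequence might look like this (assuming HiveOS's miner helper commands; set the GUI offset first):

miner stop                      # pause mining so the change applies under low load
sudo nvidia-smi -i 0 -lgc 690   # lock the clock while the GPU is idle
miner start                     # resume mining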

It doesn't look like the manual command is taking, because when I restart the miner I see the regular power limits. Is there a command I can use to verify that it took the settings?

Was the command accepted? It should look like this:

miner@simpleminer:~$ sudo nvidia-smi -i 3 -lgc 690
GPU clocks set to “(gpuClkMin 690, gpuClkMax 690)” for GPU 00000000:28:00.0

Make sure you run it with sudo if you are not logged in as root.
Also, the manual setting survives a miner restart, but it will be lost after a rig reboot.
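If you ever want to undo a lock without rebooting, nvidia-smi has a matching reset flag:

sudo nvidia-smi -i 3 -rgc   # reset the locked graphics clocks for GPU 3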

You can run nvidia-smi without parameters to get the current state.
Also, you can get all available info about a GPU using

nvidia-smi -q -i 0

where -i is the GPU index.
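To check a lock specifically, you can narrow the query to clock data, or poll just the fields you care about:

nvidia-smi -q -i 0 -d CLOCK                                      # clock section only
nvidia-smi --query-gpu=index,clocks.sm,power.draw --format=csv   # quick per-GPU poll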

Thanks much! I was just running it from the run command prompt, not the console…