Question: GIGABYTE 5700 XT BIOS mod fails

AlleyCat · Veteran · Member since: Oct 9, 2020 · Posts: 290 · Reaction score: 17 · Points: 17
Hi,
I am failing to flash a modified BIOS onto a Gigabyte 5700 XT. The same procedure updates the BIOS on MSI cards with no problem; I follow the instructions from Igor's Lab.

The sign of trouble with the card's BIOS flash shows up in GPU-Z: after flashing with amdvbflash, the GPU and memory frequency fields are empty, whereas with the stock BIOS they show MHz values.
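For reference, here is roughly the sequence I run, written out as a minimal sketch. The adapter index and file names are placeholders, and I am assuming the commonly documented amdvbflash switches -i / -s / -p:

```python
import subprocess

ADAPTER = 0                                  # placeholder: index shown by "amdvbflash -i"
BACKUP = "gigabyte_5700xt_stock.rom"         # placeholder file names
MODDED = "gigabyte_5700xt_mod.rom"

def run(args):
    """Run amdvbflash (assumed on PATH, elevated shell) and echo its output."""
    print(">", " ".join(args))
    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stdout, result.stderr)
    return result.returncode

run(["amdvbflash", "-i"])                        # list adapters, note the Gigabyte card's index
run(["amdvbflash", "-s", str(ADAPTER), BACKUP])  # always save the stock BIOS first
run(["amdvbflash", "-p", str(ADAPTER), MODDED])  # program the modified image
# After a reboot, GPU-Z should show GPU/memory clocks again; empty fields mean the
# card did not accept the new image.
```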

Are there any known problems with flashing Gigabyte cards?
Any suggestions on which other forums I could ask for assistance?
Is it possible that the OEM BIOS is signed and any modified BIOS will be rejected?
If the BIOS is signed, are there any tools to re-sign it, or would I need to buy a card from a different vendor?

Thanks,

Alley Cat
 
I tested the BIOS; it doesn't work, the GPU is bricked! :(
Thank you very much for the update.

It seems that more time is needed to figure out the type of authentication that was used here.

I believe it is because of this authentication that it was not possible to make RedBiosEditor support the RX 5500 series, as I believe all RX 5500 cards are protected. However, they did a better job this time than on the RX 5700: the RX 5500 will never allow flashing a vBIOS with an authentication mismatch, which prevents the card from being bricked.

I am trying my best to figure out the type of authentication used here; I will post an update if I find something out, whenever possible.
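As a side note, the legacy PCI option-ROM checksum (all bytes of the image must sum to 0 modulo 256) is a much weaker, separate check from the cryptographic authentication discussed here, so passing it says nothing about whether the card will accept the image. A rough sketch of that legacy check only; the checksum byte offset and file name are placeholders, since they depend on the image layout:

```python
def rom_checksum_ok(image: bytes) -> bool:
    """Legacy PCI option-ROM rule: all bytes of the image sum to 0 modulo 256."""
    return sum(image) % 256 == 0

def fix_rom_checksum(image: bytes, checksum_offset: int = 0x21) -> bytes:
    """Rewrite the checksum byte so the image sums to 0 mod 256.
    0x21 is a placeholder offset; confirm it against your image layout."""
    buf = bytearray(image)
    buf[checksum_offset] = 0
    buf[checksum_offset] = (-sum(buf)) % 256
    return bytes(buf)

with open("gigabyte_5700xt_mod.rom", "rb") as f:  # placeholder file name
    rom = f.read()
print("legacy checksum OK:", rom_checksum_ok(rom))
# Even with this check passing, a cryptographically verified vBIOS can still be rejected.
```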

I tested the BIOS; it doesn't work, the GPU is bricked! :(
There is still one way to work around this; I will send you the vBIOS after I finish, whenever possible.
 
I must confess, it is too tempting to keep tweaking for the best performance. No wonder it is called "mining": we are all in a crypto gold rush.

I needed the evidence to be convinced, and I was blown away to see the difference between HiveOS numbers and reality.

Thank you for sticking with everyone, even if people sometimes ignore your recommendations. I stuck with the 1.8 ratio (sometimes 1.82, in secret), but now I am going to re-test the rig with the power meter.
I moved from HiveOS to RaveOS for this reason: RaveOS can control the SoC better.
And I'm in China; the motherboards here have no risers.

I use this:
 

Attachments: 1.jpg (845.5 KB), 2.jpg (70.6 KB), 3.jpg (102.3 KB)
Hello guys, I installed the HiveOS application on my phone today and saw that it has two more options in the OC settings than the desktop version: clock state and mem state. Has anyone tried them? The MEM State option seems interesting; we haven't tried anything yet. Maybe someone more experienced can tell us what these options refer to.
 

Attachments: IMG_4314.jpg (337.7 KB), IMG_4313.PNG (446.1 KB), IMG_4315.PNG (608.9 KB)
Hello guys, I installed the HiveOS application on my phone today and saw that it has two more options in the OC settings than the desktop version: clock state and mem state. Has anyone tried them? The MEM State option seems interesting; we haven't tried anything yet. Maybe someone more experienced can tell us what these options refer to.
The HiveOS team said that these options are not for Navi cards. I tried to modify the modes, but it made no difference.
 
Hello guys, I installed the HiveOS application on my phone today and saw that it has two more options in the OC settings than the desktop version: clock state and mem state. Has anyone tried them? The MEM State option seems interesting; we haven't tried anything yet. Maybe someone more experienced can tell us what these options refer to.
Hi,

Mem state is not needed here, as we already set the memory timing controller to 1 in MorePowerTool.

Core state selects one of the existing profiles for dynamic power management; if you set the parameters manually, it is not needed either.
 
Hello everyone! Great discussion! There is something that bugs me with my MSI Mech 5700 XT. In Windows I set the core to 1280 MHz, and in Linux I have to set it to 1400 MHz to get 56 MH/s. Same miner software. Of course, Linux draws more power. So why does Linux need higher frequencies? Thanks.
 
Hello guys, I installed the HiveOS application on my phone today and saw that it has two more options in the OC settings than the desktop version: clock state and mem state. Has anyone tried them? The MEM State option seems interesting; we haven't tried anything yet. Maybe someone more experienced can tell us what these options refer to.
What version of HiveOS do you run?
Those options shouldn't even be there for Navi cards 🤷

Edit: Hmmm, I have not installed the app; maybe that's where they show up. Those options are for Polaris, AFAIK.
 
Hello everyone! Great discussion! There is something that bugs me with my MSI Mech 5700 XT. In Windows I set the core to 1280 MHz, and in Linux I have to set it to 1400 MHz to get 56 MH/s. Same miner software. Of course, Linux draws more power. So why does Linux need higher frequencies? Thanks.
Hello,

I believe it is because of the different Adrenalin drivers and the different kernels as well. Please check the power consumption at the wall when running in Windows and in Linux, then compare; the setup that draws less power at the same hash rate is the better choice.
On the Linux side you can try different operating systems and miners, like RaveOS, HiveOS and Hasher8OS, with TeamRedMiner as the miner.
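A trivial helper for that comparison; the hash rates and wall readings below are placeholders, so plug in your own numbers:

```python
def mhs_per_watt(hashrate_mhs: float, wall_watts: float) -> float:
    """Efficiency at the wall: higher is better."""
    return hashrate_mhs / wall_watts

# placeholder readings for the same card under the two operating systems
windows = mhs_per_watt(56.0, 160.0)
linux = mhs_per_watt(56.0, 175.0)
print(f"Windows: {windows:.3f} MH/s per W, Linux: {linux:.3f} MH/s per W")
```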
 
Is this temperature on the MSI card normal, with the fan close to maximum? The room temperature is around 5-10 °C.
 

Attachments: 1.png (30.9 KB)
The MSI Mech needs better thermal pads for the VRAM modules and the memory controller; replacing the thermal paste with a better one is a plus as well.
Or just don't buy it? It is a well-known fact that MSI Mechs have bad temps... doing the above voids the warranty.

RMA it and get a different card... if you don't care about that, I'd second Mini_Me. You should have at least a basic handle on electronics to do that, IMHO.
 
It is not the card, it is PhoenixMiner. Change to TeamRedMiner and you will see the difference; give it a try and see for yourself.
Actually (at least in my experience) it's the pool. You should use sonar to ping all the pools and choose the pool with the lowest ping. By default I was using the Hiveon pool but always wondered whether it's normal to have 5% rejected shares. Now I changed to another pool with a 23 ms ping, and my rejected shares dropped to 1%.

...and in regards to Mini_Me's advice: yes, you definitely should change Phoenix to TeamRedMiner. TRM mines ~1 MH/s less per GPU (at least on my 5700), but power usage drops at the same rate. TRM seems to be a very good option.
 
Contrary to previous posts, I stand by the MSI RX 5700 XT MECH OC. Following the excellent work and guides collated in this forum thread, I have modded a number of GPUs from different manufacturers, from A to Z. Frankly, the MSI Mech was a clear winner, at least for me. Perhaps this is because I'm not using the Hiveon pool, as its latency for me varies between 2000~3999 ms, whilst Ethermine is rock solid at 14~16 ms.
Although I'm using my Mechs with SMOS, the same results are reproducible in HiveOS. The first two screenshots are with Samsung memory; the last one uses Micron.
My aim in modding is the best hash-to-power-consumption ratio.
 

Attachments: Screenshot from 2020-11-20 09-21-19.png, Screenshot from 2020-11-20 15-15-17.png, Screenshot from 2020-11-22 09-29-28.png
I started testing power at the wall, and there are significant differences between HiveOS and the wall.
The Nitro+ reports 94 W in HiveOS, yet when I removed it from the system the power at the wall dropped by roughly 148 W.

I can't figure this out. Here are a couple of hypotheses:
1) Perhaps the GPU and riser actually draw that much power.
2) The CPU adjusts and reduces its own power when I remove a GPU from the system. (I removed all connectors and cables, including the riser power.)

The test system configuration:
2 x 5700 XT GPUs: 1 x XFX III and 1 x Nitro+

Total system power at the wall, with both cards: 324 W
Remove the Nitro+, total power at the wall: 172 W (Nitro+ = 324 - 172 = 152 W; HiveOS reports 94 W)
Return the Nitro+ and remove the XFX, total power at the wall: 174 W (XFX = 324 - 174 = 150 W; HiveOS reports 117 W)

Based on this initial test, I realize there is no value in the power numbers reported by HiveOS. I suspected there might be some inaccuracy, perhaps due to the quality of the different sensors, but I never guessed the differences were this large.

By the way, the system idles at 20 W at the wall without the cards.
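The same subtraction written out, using the readings above:

```python
# wall readings from this test
total_both = 324       # W, both cards mining
without_nitro = 172    # W, Nitro+ removed
without_xfx = 174      # W, XFX removed

nitro_wall = total_both - without_nitro   # 152 W attributable to the Nitro+
xfx_wall = total_both - without_xfx       # 150 W attributable to the XFX

for name, wall, reported in [("Nitro+", nitro_wall, 94), ("XFX", xfx_wall, 117)]:
    gap = wall - reported
    print(f"{name}: {wall} W at the wall vs {reported} W in HiveOS ({gap} W unaccounted,"
          " including riser draw and PSU losses)")
```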
You don't have to speculate about any of this, just buy a clamp ammeter. It costs only ~$40 and shows the real current drawn by the card (P = U * I = 12 V * measured value).

1606039339520.png

When you compare different scenarios and measure at the wall, the efficiency curve of the PSU distorts the value.
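A one-liner for that arithmetic (the 9.5 A reading is just an example value):

```python
def gpu_power_from_clamp(amps: float, rail_volts: float = 12.0) -> float:
    """P = U * I: clamp all +12 V wires of one card (PCI-E plugs plus riser) and read the current."""
    return rail_volts * amps

print(gpu_power_from_clamp(9.5))  # e.g. a 9.5 A reading -> 114.0 W
```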

I am going to write something that contradicts everything I have written so far.

This report is based on a test with only one GPU, a 5700 XT Nitro+. I will apply these new assumptions to a rig of 9 cards.

I checked the power consumption at the wall while the HiveOS-reported power went from 94 W to 118 W. The difference at the wall is less than 5 W (166-170 W). At 94 W in HiveOS, the power at the wall is 166 W and the hash rate is 55.9 MH/s. At 118 W in HiveOS, the power at the wall is 170 W and the hash rate is 57.9 MH/s. So for a gain of 2 MH/s I pay a negligible increase of about 4 W at the wall (roughly 2.5% of total power) while getting about 3.5% more hash.

My conclusion: run the rig at the MAXIMUM hash rate, as long as the rig is stable. All the tweaking of VDDCO, MVDD etc. has little or NO impact on the power at the WALL.

The most significant variable for power is VDD. I will try a new strategy that targets the best VDD/core combination with the highest memory clock. The goal is the highest hash rate, which is what pays, with small increments of power AT THE WALL. There is a ratio of hash gained to power added, and I am aiming for no more than 2-3 W at the wall per extra 1 MH/s.

What takes a lot of power is the fans, 3-5 watts each. I will start focusing on reducing temperatures so the fans can run slower. I will also look at replacing the risers; perhaps better risers reduce the overall power.
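A rough sketch of that rule of thumb, using the two settings measured above; the 3 W per MH/s cut-off is simply my own 2-3 W target expressed as code:

```python
def marginal_watts_per_mh(low, high):
    """low and high are (wall_watts, hashrate_mhs) pairs for two OC settings."""
    (w_low, h_low), (w_high, h_high) = low, high
    return (w_high - w_low) / (h_high - h_low)

cost = marginal_watts_per_mh((166, 55.9), (170, 57.9))   # readings from the test above
print(f"{cost:.1f} W at the wall per extra MH/s")
print("worth it" if cost <= 3.0 else "back off the clocks")
```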
YES and NO.

You should do extensive testing and write the results to Excel. My power measurements are very consistent across overclock / SoC value changes, and I don't trust any OS reading at all anymore.

1) decide your default values (Mini_Me's recommended values, for example)
2) enter those OC values into HiveOS
3) reboot and wait ~100 shares for the system to stabilize
4) measure the current with the ammeter (put ALL +12 V wires of one GPU inside the ammeter clamp, including the riser wires) and take an average over ~5 s
5) write the results to Excel (or a CSV log; see the sketch below)
6) change some values and restart from 2)

Report your findings here.

Also, please report whether you also measure ~39 W of power usage from the riser.
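A minimal sketch of the logging in steps 4-5, assuming you type the clamp reading in by hand; the file name and column set are placeholders:

```python
import csv
import datetime
import os

LOGFILE = "oc_power_log.csv"  # placeholder

def log_run(core_mhz, mem_mhz, soc_mhz, vdd_mv, clamp_amps, hashrate_mhs):
    """Append one test run; power is derived from the clamp reading on the 12 V rail."""
    watts = 12.0 * clamp_amps
    row = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "core_mhz": core_mhz, "mem_mhz": mem_mhz, "soc_mhz": soc_mhz, "vdd_mv": vdd_mv,
        "clamp_amps": clamp_amps,
        "watts": round(watts, 1),
        "hashrate_mhs": hashrate_mhs,
        "mhs_per_watt": round(hashrate_mhs / watts, 3),
    }
    new_file = not os.path.exists(LOGFILE)
    with open(LOGFILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# example entry with placeholder OC values and a placeholder clamp reading
log_run(core_mhz=1350, mem_mhz=1800, soc_mhz=1085, vdd_mv=750, clamp_amps=9.8, hashrate_mhs=55.5)
```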

Is this temperature on the MSI card normal, with the fan close to maximum? The room temperature is around 5-10 °C.
Not normal.

My MSI 5700 Gaming X has these temps (room temperature now ~5 °C):

1606038922233.png
 
Actually (at least in my experience) it's the pool. You should use sonar to ping all the pools and choose the pool with the lowest ping. By default I was using the Hiveon pool but always wondered whether it's normal to have 5% rejected shares. Now I changed to another pool with a 23 ms ping, and my rejected shares dropped to 1%.

...and in regards to Mini_Me's advice: yes, you definitely should change Phoenix to TeamRedMiner. TRM mines ~1 MH/s less per GPU (at least on my 5700), but power usage drops at the same rate. TRM seems to be a very good option.
Hi Calathia,

Actually, it was AlleyCat who pointed out the difference between TeamRedMiner and the latest version of PhoenixMiner. I had not used the newer version of PhoenixMiner, which is really different from previous versions like v4.6c, especially regarding its kernel, so I did not realize this earlier.

TeamRedMiner is designed especially for AMD, and I do recommend it, as I believe many others do as well, when using AMD cards for mining.

You don't have to speculate about any of this, just buy a clamp ammeter. It costs only ~$40 and shows the real current drawn by the card (P = U * I = 12 V * measured value).

When you compare different scenarios and measure at the wall, the efficiency curve of the PSU distorts the value.
I agree with you, dear Calathia, about using the ammeter when estimating the power consumption at the wall. I believe what matters when it comes to the electricity consumption of the place is the amperage, since in the end we are operating on AC, where the voltage is essentially constant at 110 V or 220 V according to that country's power regulations.

YES and NO.

You should do extensive testing and write the results to Excel. My power measurements are very consistent across overclock / SoC value changes, and I don't trust any OS reading at all anymore.

1) decide your default values (Mini_Me's recommended values, for example)
2) enter those OC values into HiveOS
3) reboot and wait ~100 shares for the system to stabilize
4) measure the current with the ammeter (put ALL +12 V wires of one GPU inside the ammeter clamp, including the riser wires) and take an average over ~5 s
5) write the results to Excel
6) change some values and restart from 2)

Report your findings here.

Also, please report whether you also measure ~39 W of power usage from the riser.
Dear Calathia,

If it is possible, could you test the power table in the link attached below at different TDC limits (112 A, 122 A, 132 A), to check whether there is any change in the power consumption at the wall?


The TDC limit of 112 A is intended to increase the voltage and decrease the amperage at the same specified parameters as 122 A and 132 A.

Your feedback would be highly appreciated.
 
I agree with you, dear Calathia, about using the ammeter when estimating the power consumption at the wall. I believe what matters when it comes to the electricity consumption of the place is the amperage, since in the end we are operating on AC, where the voltage is essentially constant at 110 V or 220 V according to that country's power regulations.
Hmm, I think you misunderstood me. When I recommend using the ammeter, it's for measuring the wires that go from the PSU to the GPU. These consist of (usually) 2x 8-pin PCI-E plus whatever goes to the riser card, and they carry either ground (0 V) or +12 V wires. You can google PCI-E, EPS12V and MOLEX wiring diagrams. Here is a quick summary:

- the "measurement" from the OS (HiveOS, Windows, etc.) is total BS, don't trust it
- measuring power usage at the wall is OK, but has an error of a few percent, because when you change some OC values the nonlinear power-efficiency curve of the PSU distorts the power difference
- measuring all the +12 V wires that go to one GPU with the ammeter is the best method, because you are measuring the true power usage of the GPU card.

You can also use the PCI-E, EPS12V and other power testers that YouTube mining channels use, but they are quite expensive.

Dear Calathia,

If it is possible, could you test the power table in the link attached below at different TDC limits (112 A, 122 A, 132 A), to check whether there is any change in the power consumption at the wall?


The TDC limit of 112 A is intended to increase the voltage and decrease the amperage at the same specified parameters as 122 A and 132 A.

Your feedback would be highly appreciated.
Hi!

I will try this tomorrow. I will finally get another motherboard, so I can take one GPU from my rig and do some testing :)
I am now using the 1500 MHz strap copy method + tREF = 5990. So do I start from your guide values and, with these, modify the TDC limit to 112 A, 122 A, 132 A?

What other tests do you wish me to make?
 
Hmm, I think you misunderstood me. When I recommend using the ammeter, it's for measuring the wires that go from the PSU to the GPU. These consist of (usually) 2x 8-pin PCI-E plus whatever goes to the riser card, and they carry either ground (0 V) or +12 V wires. You can google PCI-E, EPS12V and MOLEX wiring diagrams. Here is a quick summary:

- the "measurement" from the OS (HiveOS, Windows, etc.) is total BS, don't trust it
- measuring power usage at the wall is OK, but has an error of a few percent, because when you change some OC values the nonlinear power-efficiency curve of the PSU distorts the power difference
- measuring all the +12 V wires that go to one GPU with the ammeter is the best method, because you are measuring the true power usage of the GPU card.

You can also use the PCI-E, EPS12V and other power testers that YouTube mining channels use, but they are quite expensive.
I understood you very well; however, it seems that I was not clear.

You are correct that the software reports are most of the time not accurate, and you are correct that measuring the power at the wall has a small margin of error when it comes to changes in the OC parameters, because the changes made are small. Measuring the power at the rails going to the card, by contrast, gives an accurate reading of the current power consumption of each card, and even of the motherboard, SATA drives and PCI-e risers as well.

Hi!

I will try this tomorrow. I will finally get another motherboard, so I can take one GPU from my rig and do some testing :)
I am now using the 1500 MHz strap copy method + tREF = 5990. So do I start from your guide values and, with these, modify the TDC limit to 112 A, 122 A, 132 A?

What other tests do you wish me to make?
Thank you very much for your kind consideration and help,

If it is possible, could you take a screenshot or photo of amd-info when capping the SoC max at 950 MHz, at 1093 MHz and at the default 1267 MHz, along with the power consumption values for a card using Samsung memory and for a card using Micron memory at these SoC max values? That would be fantastic.
 
Here is the MSI 5700 XT Mech (GPU2) running with the SoC at 950 MHz and MVDDC at 725 mV. The card reports ~90 W. Also, I have been mining with this GPU for a month at Tmem 102 °C, no issues.

One question: how would you compare lolMiner to TRM? Thanks.

Annotation 2020-11-17 154803.png
 
Here is the MSI 5700 XT Mech (GPU2) running with the SoC at 950 MHz and MVDDC at 725 mV. The card reports ~90 W. Also, I have been mining with this GPU for a month at Tmem 102 °C, no issues.

One question: how would you compare lolMiner to TRM? Thanks.

View attachment 7609
Nice results,

Regarding capping the SoC at 950 MHz: it is safe in the long run if it is done from inside the system and not in the BIOS. The same goes for the max voltages for the SoC and GFX: setting them below 1050 mV in the BIOS will permanently degrade the memory modules over the long run and may cause a permanent malfunction that could even affect other parts. From inside the system, however, we can lower the voltages the way we see fit.
 