[SOLVED] Installation difficulties with 12 AMD GPU on Biostar TB250-BTC PRO

setup
install
12-gpu
failure
boot
amd
#1

English speaker here.

Followed all your BIOS configuration steps (primary GPU set to use PEG/x16 slot, PCI-E speed set to Gen2, CSM enabled for Other OS installation, unneeded devices disabled, etc.).

  • All GPUs are brand new AMD RX-570 and RX-580 (not modded), and risers are working.

I am using hive-flasher to deploy a large number of GPUs on Biostar TB250-BTC PRO motherboards (upgraded to latest BIOS version).

Here are the issues:

  • First of all, the instructions for FARM_HASH usage (on this forum) and hive-flasher (on the GitHub page) are inaccurate and inconsistent. You need to update your instructions on both pages for consistency. From trial and error, I discovered that one must fill out BOTH the “rig-config-example.txt” and “rig.conf” files providing the FARM_HASH and password. hive-flasher will not let you proceed unless FARM_HASH and password are filled out in rig-config-example.txt, and HIVE will not properly add the rig to the web dashboard without the rig.conf file.

  • Second issue: Unable to install the image when motherboard is set to use PEG/x16 slot for monitor. It tries to boot into the USB and always ends up with a frozen screen with white noise (but will install fine if motherboard is set to iGFX or PCI). Here is a photo:

  • Third issue… As another user already said before: The rig does not always boot into GUI with more than 6 GPUs connected (if it’s 8, 10, or 12 GPUs) - the rig will not boot into GUI or connected monitor. Another user has posted a photo of this issue, and it looks like this (black screen with blinking cursor):

  • Fourth and final issue! ALL of the GPUs report the following error: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
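For anyone else hitting the first issue, this is roughly what I ended up putting in both files. I'm writing it from memory, so the exact key names may differ between hive-flasher versions, and every value below is a placeholder:

```
# Same values go in BOTH rig-config-example.txt and rig.conf
# (key names from memory -- check them against your hive-flasher version)
HIVE_HOST_URL="http://api.hiveos.farm"
FARM_HASH="your-farm-hash-here"       # placeholder
RIG_PASSWD="your-rig-password-here"   # placeholder
```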

Can anyone please educate me on what might be going wrong? My biggest concerns are the third issue… where the rig does not boot into the GUI (or connected monitor) with more than 6 GPUs, and the fourth issue (Invalid PCI ROM header signature error).
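For context on the fourth issue: every PCI expansion ROM is supposed to begin with the 16-bit magic 0xAA55, and a read of 0xFFFF means the bus returned all-ones, i.e. nothing readable at the ROM address. A minimal sketch of the signature check (the function name is my own, not from any real tool):

```python
# Check whether a PCI expansion ROM image begins with the standard
# 0xAA55 signature. On little-endian x86 the 16-bit magic is stored
# as the byte pair 0x55 0xAA at offset 0.
def has_valid_rom_header(rom: bytes) -> bool:
    return len(rom) >= 2 and rom[0] == 0x55 and rom[1] == 0xAA

# "expecting 0xaa55, got 0xffff" means the read came back as all-ones:
print(has_valid_rom_header(bytes([0x55, 0xAA, 0x00])))  # True
print(has_valid_rom_header(bytes([0xFF, 0xFF])))        # False
```

From what I can tell, this just means the kernel could not read the card's ROM through the riser, not that the cards themselves are faulty.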

I would appreciate prompt help, seeing as I am setting up a large scale mining facility and I have a deadline to finish everything. If I do not receive adequate help, I will have no option but to move to SMOS or minerstat.

Thank you.

#2

I have the same motherboard with 8 GPUs and no problems. PCI-E speed is set to Auto.
A black screen suggests you may have one riser badly seated in its socket. Start with half of the GPUs and, if that works, add more one at a time.
The GPU installed in the main PCI-E slot may need a dummy plug if the rig is started without a monitor.
You can install the image on another PC and, when it's ready, connect the SSD to the miner.
The fourth problem is not a problem; I get the same error.

#3

Thank you for the reply.

  • What did you set your Primary Display to when running the installation? Auto/iGFX/PEG/PCI/SG?

  • I’m using hive-flasher like I mentioned earlier… to write to SSD. Apparently it matters what the Primary Display is set to in BIOS.

  • Also, like I said - PEG (x16 slot) does not work for the setup. I tried with Auto and Gen2… and I get the same result. Only iGFX and PCI work.

Edit: I’m now thinking that the reason PEG doesn’t work is that an x1 riser adapter is connected to it, as opposed to a full-length x16 card (which I cannot connect due to space limitations).

  • There are no loose risers. I have triple checked them all. All 12 GPUs show in POST.

  • The black screen with blinking cursor is apparently the GUI switching from ‘internal to external GPU’ (according to someone else on this forum). I tried moving the HDMI cable to each of the 12 GPUs, one by one, after the screen went black with the blinking cursor, and still got nothing on my monitor. I tried this several times, because it always happens with 12 GPUs connected. It works with fewer than 8 GPUs, but with 12 it always ends at the black screen with the cursor.

Any ideas?

#4

I can’t remember exactly which one I set as primary, but it wasn’t the internal graphics, because the miner didn’t like it. I can check the exact BIOS settings at the weekend.
Disable any hibernation and sleep modes too :slight_smile:

#5

Thanks for checking! Let us know your settings when you get a chance.

And I always disable all power saving / hibernation when I initially configure the BIOS. That includes C-States.

#6

These are my settings.
This time, as soon as it starts mining, the screen goes black and the cursor stays in the top-left corner. This is probably because of Hive 2; it was the first time I connected a monitor to a rig running Hive 2.

#7

Hey, thanks for the update. Do you have a full length card plugged into the main PCI-E x16 slot?

For me, the PEG display does not work when an x1 riser adapter is plugged into the x16 slot.

#8

I have an RX 480 with a riser plugged in.

A new HiveOS is out too, with a new kernel.

Maybe this will help. I installed it today and ethminer works again :slight_smile:

#9

No luck for me. The latest version, 0.5-72, is sending my rigs into a constant reboot loop: Boots, connects, and then reboots. And keeps looping indefinitely.

Lack of developer support on this forum has practically shown to me that I cannot rely on this platform, and I think that my money will be better spent elsewhere.

Sorry guys, but I’m not able to make any sense of your buggy software. Time to look for something more reliable.

#10

P.S. Installation with PEG as the Primary Display still doesn’t work with the latest HIVE image. This is what always happens:

The only way the installation works is by selecting PCI as the Primary Display.

#11

Did you disable internal graphics too?
Looking at the pictures, I have PEX16_1 set as root.
Have you tried booting from an SSD or HDD? I had problems with USB, so I changed to an SSD.

#12
  • Internal graphics: Disabled
  • Primary Display: PCI (the only one that works)
  • HIVE installed to SSD

And like I said, the latest version of HIVE (0.5-72) sends the entire rig into an endless boot loop.

Older version (0.5-57) did not have the boot looping issue, but had a very difficult time installing on a fresh 12 GPU build no matter what the BIOS settings were. I described all the issues in my initial post.

#13

Looks like a new image was just posted, in hopes of fixing the bugs I just described… 0.5-73

0.5-73 - Hotfixes, missing symlinks, libraries, dependencies, Ubuntu 18.04 compats, etc

#14

Nope, no luck with 0.5-73 either. Rigs go into an infinite reboot loop after the interface is loaded…

@HaloGenius - are you able to help?

#15

[SOLVED] Here’s how:

I found out the hard way that running 12x MSI RX 580 GPUs was drawing too much power from the power supplies. The reason the rigs went into boot loops is that the total power drawn exceeded what the supplies could immediately deliver. I had spec’d the power supplies to provide enough juice to the GPUs, but without much overhead.
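With made-up but representative numbers, the budget problem looks like this (these are not my real wattages; they just show the shape of the failure):

```python
# Hypothetical power budget for a 12-GPU rig. All numbers are
# illustrative, not my actual measurements.
GPUS = 12
WATTS_PER_GPU_PEAK = 160   # RX 580 at boot / full power state
BASE_SYSTEM_WATTS = 120    # board, CPU, risers, fans
PSU_RATED_WATTS = 2000
PSU_SAFE_FRACTION = 0.8    # common rule of thumb for continuous load

peak_draw = GPUS * WATTS_PER_GPU_PEAK + BASE_SYSTEM_WATTS
safe_capacity = PSU_RATED_WATTS * PSU_SAFE_FRACTION

print(peak_draw)                  # 2040
print(safe_capacity)              # 1600.0
print(peak_draw > safe_capacity)  # True -> brownout / reboot loop
```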

The solution was to go immediately to the Overclocking tab of each new rig as it booted and change the Core State Index value to “1”. This allowed each rig to run without crashing or endlessly boot looping.

A value of “1” instructs the GPUs to reduce power in idle cores, while a value of “7” would drastically increase power consumption on all cores (the default is 5).

So, to be able to configure each 12 GPU rig at all, I had to start every rig with as little power consumption as possible.
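From what I can tell, the Core State Index maps onto the amdgpu driver’s sclk DPM states, which are listed in sysfs under /sys/class/drm/cardN/device/pp_dpm_sclk. I haven’t scripted this myself; the sketch below just parses the usual “index: frequency” listing, with “*” marking the active state (the sample frequencies are made up):

```python
# Parse an amdgpu pp_dpm_sclk listing. Each line is "index: freqMhz",
# with a trailing "*" on the currently active state. The sample text
# is illustrative; real values depend on the card's BIOS.
SAMPLE = """\
0: 300Mhz
1: 600Mhz
2: 900Mhz
5: 1150Mhz *
7: 1340Mhz
"""

def parse_dpm_states(text):
    """Return {state index: core clock in MHz} for every listed state."""
    states = {}
    for line in text.splitlines():
        idx, freq = line.split(":")
        freq = freq.strip().rstrip("*").strip().removesuffix("Mhz")
        states[int(idx)] = int(freq)
    return states

states = parse_dpm_states(SAMPLE)
# Forcing state index 1 caps the core at a low clock, cutting power
# draw until the rig is configured (what Core State Index = 1 did for me).
print(states[1])  # 600
print(states[7])  # 1340
```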

#16

Great that you got it up and running.
Try aggressive undervolting too

#17

Advice on RX 570 and RX 580 overclocking would be greatly appreciated. I have just over 450 GPUs at this facility that I’m launching - mix of MSI 570/580.

#18

Check this topic then

#19

I’d like to eventually get into BIOS modding, but right now I’m pressed for time and have a deadline to finish building everything. So for now, I’m just looking for safe and effective OC values (Core, Voltage, Mem, etc.)

Any recommendations? I found a set of values for the RX 580 that works like a charm at 30 MH/s, even without BIOS modding, but I didn’t see anything similar for the RX 570.

#20

How long have you mined with these settings? Any invalid shares?