Multiple motherboards in one case under a single OS.
verthex

Hello gang!

So I'm wondering: instead of a multiprocessor mainboard under one OS, is there a way to build a machine that has multiple mainboards stacked somehow into one case, using a single large power supply, and have it all run off one hard drive with one OS? Is this possible under Windows, Linux, or some other OS?

8-)

jhuuskon

Sure it is, Gentoo supports it natively.

deps

Only if you network them, and you can't physically connect a hard drive to two boards.
But I'm not a hardware guy, so I don't really know; it just sounds silly.

Edit: I stand corrected. :P
Edit again: Guess I wasn't.

jhuuskon

I was kidding, dumbasses. What you're describing is a cluster computer; I'm sure Google can tell you what you need for one to work and what you can do with it.

verthex
deps said:

But I'm not a hardware guy, so I don't really know; it just sounds silly.

Think about it this way: 10 computers in one case versus 10 computers in 10 cases with 10 separate power supplies. I think the former is the better alternative: it saves space, (possibly) generates less heat, and may be cheaper to build, since I won't need $500 just for 10 cases.

Great article

Thomas Fjellstrom

You can put multiple machines in a few cases. It's done all the time with blade-type servers. There's even a newish kind where they put the power in the server rack itself, and all of the individual blades share the same redundant power supply. Something like 150 or 300 mini blade servers in a single rack. And they are separate systems, each with their own OS install. There are also other types of systems where you can hot-swap blade-like modules, but they all sit on the same shared bus and act as a single MONSTER (4096 CPUs? No problem! Multi-TB RAM? Sure!) system.

None of these machines are affordable to most people, though. The most you'll be able to manage yourself (most likely) is a single large case with a few motherboards mounted inside. It'd take some customizing of the case and the power supply, though. And if you really care about uptime, you'll want a redundant hot-swappable power supply. Those aren't cheap either.

type568
verthex said:

Think about it this way: 10 computers in one case versus 10 computers in 10 cases with 10 separate power supplies. I think the former is the better alternative: it saves space, (possibly) generates less heat, and may be cheaper to build, since I won't need $500 just for 10 cases.

The question is why you need them under one OS.
You can connect them using Ethernet (for example) and network-boot from a single HDD.
So generally, it's just a question of a case and a PSU.

verthex
type568 said:

You can connect them using Ethernet (for example) and network-boot from a single HDD.

Is there some kind of tutorial for that? Thus far I've only seen clusters which require separate hard drives for each board; or actually, they just don't explain the logic behind having a single HDD.

type568

Sorry, no. I'm just speaking from general knowledge. Yet it is obviously not an issue, if you get a proper case and PSU, as I stated before.

Edit:
or actually they just don't explain the logic behind having a single HDD.

The logic of having a single HDD is to save money, obviously, and also effort in some cases. Network booting makes it possible.

verthex

You can put multiple machines in a few cases. It's done all the time with blade-type servers. There's even a newish kind where they put the power in the server rack itself, and all of the individual blades share the same redundant power supply. Something like 150 or 300 mini blade servers in a single rack. And they are separate systems, each with their own OS install. There are also other types of systems where you can hot-swap blade-like modules, but they all sit on the same shared bus and act as a single MONSTER (4096 CPUs? No problem! Multi-TB RAM? Sure!) system.

None of these machines are affordable to most people, though. The most you'll be able to manage yourself (most likely) is a single large case with a few motherboards mounted inside. It'd take some customizing of the case and the power supply, though. And if you really care about uptime, you'll want a redundant hot-swappable power supply. Those aren't cheap either.

Blade would be nice but I'm looking at it from this perspective right now...

type568 said:

Sorry, no. I'm just speaking of general knowledge. Yet, It is obviously not an issue, if you get a proper case and PSU, as I stated before

I know, a lot of this stuff seems ad-hoc.

Thomas Fjellstrom

If you're really going to do this and don't want a HD plugged into each mobo, plug one into the master node, and set up DHCP/BOOTP + TFTP and network booting on the rest. Have all of the slave nodes set to boot off the NIC, and they will automatically query your master node for an IP, grab a boot image off the master, and finish booting. With the right config inside the boot image, each machine can set up its root filesystem over NFS as well. It'd be best for performance if all the machines spoke Gigabit Ethernet and you put in a real Gigabit Ethernet switch (no hubs). The good thing is a decent 8-port GbE switch is rather cheap these days, and most motherboards come with at least one built-in GbE port.
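For what it's worth, a minimal sketch of the master-node side using dnsmasq (which bundles DHCP and TFTP in one daemon). The interface name, address range, paths, and the pxelinux bootloader are assumptions for illustration, not something from this thread:

```ini
# /etc/dnsmasq.conf on the master node (illustrative values)
interface=eth0                             ; NIC facing the slave nodes
dhcp-range=192.168.0.50,192.168.0.100,12h  ; pool of addresses for the slaves
dhcp-boot=pxelinux.0                       ; boot file the slaves request
enable-tftp
tftp-root=/srv/tftp                        ; holds pxelinux.0, kernel, initrd
```

The kernel command line inside the boot image would then point the root filesystem at an NFS export on the master, something like `root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp` (again, hypothetical paths).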

type568

If you're really going to do this and don't want a HD plugged into each mobo, plug one into the master node, and set up DHCP/BOOTP + TFTP and network booting on the rest. Have all of the slave nodes set to boot off the NIC, and they will automatically query your master node for an IP, grab a boot image off the master, and finish booting. With the right config inside the boot image, each machine can set up its root filesystem over NFS as well. It'd be best for performance if all the machines spoke Gigabit Ethernet and you put in a real Gigabit Ethernet switch (no hubs). The good thing is a decent 8-port GbE switch is rather cheap these days, and most motherboards come with at least one built-in GbE port.

Yawn. Something like that, but he still needs a PSU & a place to store that stuff in.

verthex
type568 said:

place to store that stuff in.

I'm not worrying much about the PSU, although I wonder if there's one made for many boards, so I could have a single PSU with something like 10 connectors that can power each board. I'm more worried about the enclosure; I'm not sure how or where I could find one prebuilt, besides having to buy a whole system with it. I guess going to a hardware place and buying specialty metal frames with holes would work. I'm also thinking of liquid cooling and getting an AMD Black Edition processor for overclocking.

ixilom

There was (they canned the project at 75% completion) a PCI card for PCs, ages ago, that hosted an Amiga (a 600, I believe), called the Siamese :)

Thomas Fjellstrom
verthex said:

I'm not worrying much about the PSU, although I wonder if there's one made for many boards, so I could have a single PSU with something like 10 connectors that can power each board. I'm more worried about the enclosure; I'm not sure how or where I could find one prebuilt, besides having to buy a whole system with it. I guess going to a hardware place and buying specialty metal frames with holes would work. I'm also thinking of liquid cooling and getting an AMD Black Edition processor for overclocking.

Power supplies barely manage to run a single overclocked CPU these days. AMD's Black Editions are all generally 125 W or higher (some are 140 W), and that's at stock settings. Overclock one and it's likely to jump much higher. Good luck powering multiple overclocked systems with a single PSU.

In the end you're most likely going to have to modify a case and PSU yourself. You might be able to find ATX power cable splitters on eBay or something, so you might not have to actually splice in more ATX power plugs yourself.

If I were going to work on a project like this, I'd probably try to find a PSU from a disused blade server box, or even just an old redundant power supply from some server.

verthex

With two of these I might get 8 boards running.

type568

In the end you're most likely going to have to modify a case and PSU yourself. You might be able to find ATX power cable splitters on eBay or something, so you might not have to actually splice in more ATX power plugs yourself.

Warning: if you want to run just one of the mobos at some point, or at least restart one while the rest stay on, you might need to think again.

Thomas Fjellstrom

You'll be best off going with a PSU that either has a single 12 V rail, or maybe two, but keeps the amperage EVEN across the rails.

One problem you will probably face is the "power ok" line, standby power, and other such bits that each motherboard is provided with. I'm not sure how well it'll work when you hook up multiple boards to the same PSU.

verthex
type568 said:

Warning: if you want to run just one of the mobos at some point, or at least restart one while the rest stay on, you might need to think again.

I'm thinking of keeping the main node running while having the option to power on machines as resource needs increase. That would be the most effective system.

One problem you will probably face is the "power ok" line, standby power, and other such bits that each motherboard is provided with. I'm not sure how well it'll work when you hook up multiple boards to the same PSU.

Is there any way to switch the nodes off based on need?

Thomas Fjellstrom
type568 said:

Warning: if you want to run just one of the mobos at some point, or at least restart one while the rest stay on, you might need to think again.

There might be issues, but tbh I think as long as one board is still on, it should keep the PSU from shutting off. Or make sure the other boards don't connect to the "power ok" signal and other such signals.

verthex said:

Is there any way to switch the nodes off based on need?

I honestly don't know for certain. The PSUs and mobos weren't designed for this.

verthex

I honestly don't know for certain. The PSUs and mobos weren't designed for this.

What I'm thinking is that if it were possible to boot a node board off the main node's HDD with the other boards staying off, then there wouldn't be a problem with the PSU, since the boards that are running only draw the required current. The problem I'm seeing is: how would it be possible to boot a single node if the main one is already running? I wonder if it would be possible to just have a bootable USB stick for each board? Maybe that would work.

Thomas Fjellstrom

The hard drive or storage really has nothing to do with the problem. The problem is what happens when you wire up more than one board to a PSU? I don't know.

It is possible that it'll all "just work" because the main/master node is always on. But things might also glitch if they all share the same low-voltage "special purpose" signal lines.

You SHOULD be able to wake nodes up using the power switch, and of course wake-on-lan. What happens when they are all sharing the "power ok" and "vsb" type lines, I have no clue.
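The wake-on-LAN part is easy to script from the master node: the "magic packet" is just 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, broadcast over UDP. A small sketch (the MAC shown is hypothetical, and the slave's BIOS/NIC must have WoL enabled for this to do anything):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Standard Wake-on-LAN payload: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet (UDP port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# e.g. wake("00:11:22:33:44:55")  -- hypothetical MAC of a slave node
```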

verthex

You SHOULD be able to wake nodes up using the power switch, and of course wake-on-lan. What happens when they are all sharing the "power ok" and "vsb" type lines, I have no clue.

I think I will try making a cluster with a main node and another mobo, so two computers together and see how that would work.

Thomas Fjellstrom
verthex said:

I think I will try making a cluster with a main node and another mobo, so two computers together and see how that would work.

Well, if you do it with two separate power supplies, it'll work fine. Once you get into modifying a PSU to get two motherboards hooked up, well, there are no guarantees.

verthex

Well, if you do it with two separate power supplies, it'll work fine. Once you get into modifying a PSU to get two motherboards hooked up, well, there are no guarantees.

I see; basically, if one machine is off, the whole thing is off too. Yeah, I'll have to see how that runs first, and see if someone has invented some kind of third-party PSU kit for it.

Oscar Giner

I see problems with the data wires. You can't connect two output signals to the same input, since the signals will collide. You'll need a controller between the PSU and the motherboards that takes care of this (sends a power-off only when all motherboards are off, and such things). I don't know if such a controller exists; if not, you could try to build one yourself with an FPGA, a PIC chip, or similar.

<edit>
When only one motherboard is powered off, this controller would manage it and cut the power to it, while of course keeping the PSU on for the others. Or probably it's not even necessary to really cut the power.

Thomas Fjellstrom

I'm not convinced that's necessary. There are no "data" lines on a power supply. If you were feeling generous you could call the "power ok" line a "data" line, but it's a single bit (on or off), and it never changes once the power levels have evened out after power-on.

If I had a use for such a system I'd probably whip one up myself. But I got a new quadcore box to run VMs on so I don't really need a bunch of separate systems.

piccolo

May I ask what triggered you to think of doing this?

verthex
piccolo said:

May I ask what triggered you to think of doing this?

Same as my question: what triggered you to question me?

piccolo

I thought you were trying to invent something.

blargmob

What a stupid idea verthex.

verthex
Blargmob said:

What a stupid idea verthex.

Thank you for your input, loser. Notice that everyone besides you had something useful to add to this thread, and I suppose all those universities are stupid too, but you say "What a stupid idea verthex," which is something retarded stalker trolls do everywhere I've gone on the net. Please refrain from replying to me, and thanks for wasting my time.

blargmob
verthex said:

Thank you for your input, loser. Notice that everyone besides you had something useful to add to this thread, and I suppose all those universities are stupid too, but you say "What a stupid idea verthex," which is something retarded stalker trolls do everywhere I've gone on the net. Please refrain from replying to me, and thanks for wasting my time.

BAHAHAHAHAHAH!!!! ;D ;D ;D ;D ;D

Please cry more. 8-)

p.s. Proofread your posts.

Thomas Fjellstrom

blargmob, seriously you're just as stupid ::)

verthex
blargmob said:

BAHAHAHAHAHAH!!!! ;D ;D ;D ;D ;D

Please cry more. 8-)

p.s. Proofread your posts.

I really wish there were a way to block jerkoffs like you, but this is not my site. I don't spend time proofreading posts for someone who is retarded.

BAF

blank stare at blargmob's avatar

Anyhow, you can ignore people, right in the control panel on this site. But we're not supposed to talk about who we've ignored - ML has proclaimed that it is grounds for banning.

verthex

noted, thanks BAF!

Ron Novy

You could use a set of logic gates to control a single power supply if needed and run them off the +5VSB line (usually purple), but if you're powering up to 8 boards, then you might be better off with 8 cheaper, lower-wattage power supplies...

Here is an old schematic I did using logic gates. It can isolate the motherboard signals from each other to control the PSU. Not sure if it works in real life, but it seems to function as intended in simulation.

[attached image: PSU control schematic (599896.jpg)]

[edit] The capacitors are decoupling capacitors. There should be one across the power pins of each IC.

verthex
Ron Novy said:

You could use a set of logic gates to control a single power supply if needed and run them off the +5VSB line (usually purple), but if you're powering up to 8 boards, then you might be better off with 8 cheaper, lower-wattage power supplies...

Here is an old schematic I did using logic gates. It can isolate the motherboard signals from each other to control the PSU. Not sure if it works in real life, but it seems to function as intended in simulation.

Thanks Ron, I'll try that with 2 boards and hope they won't fry! What do you mean it works in simulation? Is it PSpice?

Ron Novy

The circuit was created using 'TINA'. You can download a free version called TINA-TI from Texas Instruments website.

It basically works when run in a simulation under TINA, but I've never actually built the circuit. It should work though. My only concern was that it mixes TTL and CMOS chips.

The circuit could probably be done in a simpler way. Maybe even with just 8 diodes and a pull-up resistor from PS_ON to +5VSB... or maybe even without the pull-up resistor... I think the power supply would have one internally...

Anyway... I thought about it a long time ago, but never dove in to test anything... Just some ideas ;)

verthex

Thanks, Ron. I figure 8 boards and Phenom II chips alone would cost about $2400 USD. That doesn't include the water-cooling blocks (misc. parts), RAM, HDDs (or USB flash boot disks), and one really fast network card for the main node, ~$300. So I'd say the total cost, with 8 chips that can each be overclocked to 4-6 GHz, would be around $3500 USD, for about 32 GHz aggregate at minimum with overclocking, and a high network transfer rate. Fingers crossed.

MiquelFire

You don't understand network booting, do you? With a network boot, you don't need any drives, just RAM.

verthex

You don't understand network booting, do you? With a network boot, you don't need any drives, just RAM.

No, but thanks for the tip. I'm basically trying to get the cheapest system for the most processing speed. This system will be used mostly for numerical methods, PDEs, etc.

Thomas Fjellstrom

Get yourself some GPUs and some Core i7s. Or Xeons, preferably 6-8 core Xeons. Quad-proc mobos would be good too ;)

verthex

Get yourself some GPUs and some Core i7s. Or Xeons, preferably 6-8 core Xeons. Quad-proc mobos would be good too ;)

Numerical computation is all CPU, so I would only need one GPU, and AMDs are supposedly easier to overclock than Intel?

ImLeftFooted

Ah interesting. So the ignore list doesn't hide posts but it does block people from posting on your thread.

verthex

Yes, as a fan of Kim Jong, I too want a socialist thread and to have power over my community ;) :-/ Anyway, Xeons are super expensive; does anyone know why?

ImLeftFooted

If a child is conceived in the same building as a 4 CPU x 8 core Xeon, will a communist dictator be born? The world may never know...

verthex

If a child is conceived in the same building as a 4 CPU x 8 core Xeon, will a communist dictator be born? The world may never know...

Nope, Kim was born before the processor, or at least before the Intel x80.

BAF
verthex said:

Numerical computation is all CPU, so I would only need one GPU, and AMDs are supposedly easier to overclock than Intel?

No, it's not. I take it you've never heard of CUDA. If you're looking for a quick and cheap box with lots of calculation power, I'd say grab some powerful video cards.

Thomas Fjellstrom
verthex said:

Numerical computation is all CPU, so I would only need one GPU

What mathematics deals strictly with non-floating-point integers? GPUs are insane at floating point. The new Nvidia chip is supposedly a monster at double-precision floating-point number crunching.

Quote:

and AMDs are supposedly easier to overclock than Intel?

What does overclocking matter when a 4 GHz Intel i7 or Xeon is faster than even many overclocked AMD models? I can't wait to see if AMD's new family of chips will manage to close the gap with the i7s, but right now it's no contest. The i7 platform wins hands down.

verthex
BAF said:

No, it's not. I take it you've never heard of CUDA. If you're looking for a quick and cheap box with lots of calculation power, I'd say grab some powerful video cards.

I guess, although I still wonder what the point of Xeons is, since I only need one fast network card, not 8 network chips.

What does overclocking matter when a 4 GHz Intel i7 or Xeon is faster than even many overclocked AMD models? I can't wait to see if AMD's new family of chips will manage to close the gap with the i7s, but right now it's no contest. The i7 platform wins hands down.

I was hoping to get into the 5.5 GHz range with a Phenom II by cooling the CPU down to the -40 °C range. I've heard that's possible with methane and special phase-change cooling systems; a regular condenser unit from a fridge might also work...

BAF

Xeons are CPUs...

And what exactly is the point of this system? Overclocking a Phenom II to 5.5 GHz hardly sounds like the utmost in reliability. It also sounds like a big power waster; good luck running handfuls of such setups on one PSU. :P

verthex

I'm actually thinking of buying this now. It just doesn't say how fast it is. 250x what PC?

edit: According to this article, the equivalence of FLOPS to GHz is "1 TFLOPS at 3.13 GHz" (although I assume this is possible because the CPU is 3.13 GHz and allows the GPU to run that fast). But is every calculation in an executable a FLOP?
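For what it's worth, quoted peak figures are usually just cores × clock × FLOPs-issued-per-cycle, and no, not every instruction is a FLOP (integer and memory ops don't count). A rough sketch of the arithmetic; the FLOPs-per-cycle figure is an assumption that varies by architecture:

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in GFLOPS: cores * clock * FLOPs issued per cycle.
    Real code hits only a fraction of this, since integer and memory
    instructions are not FLOPs at all."""
    return cores * clock_ghz * flops_per_cycle

# Hypothetical quad-core at 3.13 GHz issuing 4 FLOPs/cycle (e.g. SSE2 doubles):
print(peak_gflops(4, 3.13, 4))  # roughly 50 GFLOPS
```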

BAF

Why do you want to build this again?

verthex

Making grids of finite elements for PDEs. I guess they use a lot of FP calcs anyway, so CUDA would be helpful. I just can't get over the fact that I need a CPU to have a GPU; what's the reason for that, and why do they even bother with CPUs if the GPU is so fast in the first place?
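As a toy illustration of the kind of work this is (a finite-difference sketch, not real finite-element code, with made-up grid size and coefficient), one explicit time step of the 2D heat equation is exactly the dense floating-point array arithmetic that both a cluster and a GPU accelerate:

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of the 2D heat equation,
    with boundary cells held fixed (Dirichlet). Stable for alpha <= 0.25."""
    un = u.copy()
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un

# Toy grid: a single hot spot diffusing outward over 50 steps.
u = np.zeros((64, 64))
u[32, 32] = 100.0
for _ in range(50):
    u = heat_step(u)
```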

BAF

Because the GPU isn't as general-purpose nor as easily programmable? You need a CPU to have a hard drive too; what's your point? :P

Thread #602353. Printed from Allegro.cc