Six computers in one: Mini ITX Clusterboard

Channel: Jeff Geerling

This mini ITX motherboard has six computers on it, and if I flip it over, there are six SSDs on the back. What time is it? It's clustering time.

This is the Super6C, a Raspberry Pi cluster board like the Turing Pi 2 supercomputer I built last year. It fits in normal PC cases, but unlike the Turing Pi 2, it holds two more Pis, it fits inside a 1U enclosure, and you can buy it right now for 200 bucks. A lot of people test out things like Kubernetes or K3s on a little cluster like this, but today I'm going to explore something a little different: I'm going to test this thing by turning it into a Ceph storage cluster, so I can pool together all these NVMe drives, and we'll see if it can even play in the same league as a $12,000 storage appliance. Spoiler: it can't, but we'll get to that.
I bought my Super6C from DWM Zone, but DeskPi also sells it on their website. It comes with a 100-watt power adapter, some standoffs so you can run the board without a case, and a set of screws to mount up to six M.2 SSDs on the bottom. There are spaces for up to six Raspberry Pi Compute Module 4s; that's this little guy, which is an entire computer that's smaller than a playing card. And the Super6C has a built-in gigabit switch.

Now, that doesn't take all the Pis, put them together, and make one giant Pi. You still have to manage the Pis yourself; it's just that this board makes it a lot simpler, since you wouldn't need to buy six Pi 4s, six power adapters, six ethernet cables, and a separate network switch. That's how I built my Raspberry Pi Dramble cluster back in the day, but after using this board, I don't think I'll ever go back. This board is truly a cluster in a box.
And it's super thin. I'm going to install it in this tiny PC case, and I also designed an I/O shield. Actually, I had to design four versions before I could get this one to fit, because the tolerances for some of the ports are a little off.

While we're looking at the ports, I guess I'll talk about I/O. There's the power connector, which accepts 19 to 24 volts, and I'll talk more about power consumption later. There are two gigabit ethernet ports connected directly to this Realtek ethernet switch chip, which connects through to each Pi at a full gigabit. Then there are two HDMI ports, a micro USB port, and two USB 2.0 ports, all connected straight through to CM4 number one, meaning you can manage this entire cluster standalone if you plug in a keyboard, mouse, and monitor.
Finally, on the back there are these six green LEDs, and they show you the activity for each Pi in the cluster. But if you like LEDs, that's not all you get: each Pi gets its own set of four LEDs, two for power and activity and two for ethernet. Each Pi also gets its own little microSD card slot on the back, which is useful if you have Lite compute modules, and each Pi gets its own micro USB port on the top side for flashing modules with eMMC storage.

The board has its own little PMU, or power management unit, and it comes with a power and reset button over here. You can also plug in front panel connections, but I noticed the power LED and any fans you have plugged in are always on. That means it can be hard to tell whether the cluster's running if you have it installed inside a case. But you can turn on the cluster by pressing the power button, and you can force shut it down by holding down the power button for five seconds.
I asked DeskPi if their board's firmware is open, or if they have any remote access features planned, but they said no. Also, the onboard ethernet switch is not managed, so you can't do things like set up VLANs or do any other advanced routing. And no, just plugging two connections into a gigabit switch won't double the bandwidth of this board; I actually tested that. Both of those features are coming on the Turing Pi 2. So with all these kinds of boards, there are always trade-offs. I mean, looking at the board space, I can understand: trying to cram every feature possible for six computers onto a mini ITX motherboard is pretty much impossible.
So I plugged all the Pis in on the top, but I couldn't use the heatsinks I normally use, so for now I popped them off and mounted a big fan to the case. I actually started with a Noctua, but even with its quiet adapter cable it was a little bit loud, since this board doesn't have PWM. I swapped in this Arctic F12 Silent, and it was almost silent. Everything stayed cool, even without heatsinks.

On the bottom I installed six of these: Kioxia XG6 NVMe SSDs. They're overkill for a Raspberry Pi, but since I had a couple on hand already, Kioxia reached out and sent four more. So if you see the "includes paid sponsorship" thing on this video, that's why. They didn't pay me, but they did send me four SSDs to fill up the rest of these slots.
After I put the drives in, I flashed 64-bit Pi OS to six microSD cards and put one in each slot. When I flashed the OS, I made sure to add hostnames like deskpi1, deskpi2, and so on; that way, when it comes time to connect to them, I don't have to figure out their IP addresses. I think if I had one major complaint about the hardware, it's how hard it is to access the microSD card slots. They're all at different angles, and since they're flat against the bottom, I can't even fit my fingers in to remove a card. A couple of times I even had to use a spudger to get the card out.
But now, with everything put together, it's time to mount this thing in my case. I found this cheap mini ITX case on Amazon, and I'll put a link to it below. I slipped in my 3D-printed I/O shield, put in the motherboard, and screwed it in. I plugged in the front panel connectors, but then when I went to plug in the front panel USB 2.0 plug, I noticed the header on the motherboard had all the pins on it. It's supposed to be a keyed connector with one pin missing, so I couldn't plug in the USB connector. I also asked DeskPi about this, and they said it was a problem with the first production run, but it's been fixed now.
So, on to the first boot. When I plugged it in, the front panel power LED lit up and the fan started spinning right away; it looks like those are just always on. When I pressed the power button, all the Pis lit up almost at the same time, and it was kind of fun and mesmerizing watching all the blinking activity LEDs on the back. It reminded me a little of WOPR from the movie WarGames. The WOPR spends all its time thinking about World War III.

I also measured power consumption and found that the board uses less than a watt powered down with just the fan running, about 17 watts with six Pis running, 24 watts max, and 11 watts if you shut down the Pis but don't power off the cluster using the power button. One thing I don't like about the design is how the ethernet ports on the back don't have status lights; I had to check the other end of the cable on my switch to make sure it was actually connected. But everything booted up great, except one of the CM4s had a bootloader issue, which meant its activity LED would just light up green and it wouldn't boot. After a quick trip over to my Pi tray to reflash the Pi's firmware, I got it booting too.
And a few quick notes about board power: it doesn't look like there's a way to shut off power to just one Pi at a time, so hot-swapping Raspberry Pis doesn't seem like a very safe thing to do on this board. The Turing Pi 2 has more advanced firmware, so you can control power to each slot. That's not a huge issue here, but it is something to be aware of.
And you know how I mentioned this board doesn't just take six Raspberry Pis and slap them together into one big Raspberry Pi? Well, the natural question then is: how am I going to manage six Raspberry Pis? You could log into each one with SSH, but in my case, I'm going to use a tool that's perfect for managing a Pi cluster: Ansible.

I have a whole book and even a YouTube series on Ansible, and I'll link to them below. But to get started, all I need to do is create an inventory file to tell Ansible where to find the servers (that's this file here), and then a playbook to tell Ansible what to do with them. I won't get too deep into it here, but I've posted the entire playbook in a more detailed guide on my GitHub, so check the link below.
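To give you an idea, a minimal inventory for this cluster looks something like the sketch below. The hostnames match the ones I set while flashing, but the .local mDNS names and the pi user are assumptions here; the exact file I used is in the guide linked below.

```ini
[cluster]
deskpi1.local
deskpi2.local
deskpi3.local
deskpi4.local
deskpi5.local
deskpi6.local

[cluster:vars]
ansible_user=pi
```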
The first thing I wanted to do was make sure all the Pis were up to date, running the latest version of Raspberry Pi OS, so I wrote this playbook and ran it on the Pis. It updates them all and automatically reboots them if they need to.
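A rough sketch of what a playbook like that does (the real one is in the linked guide) is:

```yaml
---
- name: Upgrade all the Pis in the cluster.
  hosts: cluster
  become: true

  tasks:
    - name: Upgrade all apt packages.
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Check if a reboot is required.
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot the Pi if the upgrade requires it.
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
```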
Then I wanted to try out some software I've never used before: Ceph. Ceph is a distributed storage system. You might know RAID, where you can plug multiple hard drives into one computer and mash them together for redundancy or performance. Well, Ceph is like that, but instead of multiple drives on one computer, you can put multiple drives from multiple computers together in a storage array and let Ceph deal with redundancy and networking.

I followed the instructions from this blog post and used a tool called cephadm to set everything up. First I just tried installing cephadm, but apt said it couldn't find it. I actually had to turn on the unstable Debian repository, since cephadm is so new it's not in the repos that ship with Raspberry Pi OS yet. But I did that, updated apt's cache, and installed cephadm without any problems.
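The general shape of that, on each Pi, is something like the commands below. The exact sources entry and pinning are an assumption on my part, so follow the linked guide rather than copying this verbatim.

```bash
# Add Debian unstable as an extra apt source (sketch; adjust for your setup)
echo 'deb http://deb.debian.org/debian unstable main' | sudo tee /etc/apt/sources.list.d/unstable.list

# Pin unstable low so only packages you explicitly request come from it
printf 'Package: *\nPin: release a=unstable\nPin-Priority: 50\n' | \
  sudo tee /etc/apt/preferences.d/limit-unstable

# Update the cache and pull cephadm from unstable
sudo apt update
sudo apt install -y -t unstable cephadm
```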
Then I ran this bootstrap command, and to get the cluster set up, I had to know the Pi's IP address, so I hopped over to another terminal and grabbed that. It took about five minutes, but at the end it spits out some information for accessing the Ceph dashboard in a browser.
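The bootstrap itself is a one-liner; the IP below is just a placeholder for the first Pi's actual address.

```bash
# Bootstrap a new Ceph cluster, with this Pi as the first monitor node.
# When it finishes, it prints the dashboard URL and an initial admin password.
sudo cephadm bootstrap --mon-ip 10.0.100.101
```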
Before doing that, I had to copy the Ceph public key from the main Pi to all the other Pis. But I didn't have that Pi set up to log into all the other Pis yet, so I decided to just do all of it in Ansible, since that would make the process easier. So I wrote this task that saves the public Ceph key from the main server, and then this task that tells Ansible to add it to the authorized keys for the root user on all the other Pis. This way, Ceph will be able to manage all the other Pis when I connect them.
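Those two tasks look roughly like this. A few assumptions in the sketch: the key lives at /etc/ceph/ceph.pub (where cephadm bootstrap writes it), the first Pi is named deskpi1.local in the inventory, and the play runs with become: true. The exact tasks are in the linked guide.

```yaml
- name: Read the Ceph public key from the main Pi.
  ansible.builtin.command: cat /etc/ceph/ceph.pub
  register: ceph_pub_key
  changed_when: false
  when: inventory_hostname == 'deskpi1.local'

- name: Add the Ceph key to root's authorized_keys on the other Pis.
  ansible.posix.authorized_key:
    user: root
    key: "{{ hostvars['deskpi1.local'].ceph_pub_key.stdout }}"
  when: inventory_hostname != 'deskpi1.local'
```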
And it was a good thing I was using Ansible, because I realized after I logged into the dashboard that you have to have some other stuff installed too before Ceph can work on all those Pis. It was saying it needed a container engine, and also something called lvcreate, which is part of the lvm2 package. So over in Ansible, I added this task that makes sure podman and lvm2 are both installed on all the Pis.
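That task is a simple one; a minimal sketch, assuming the same play as before:

```yaml
- name: Ensure Ceph's prerequisites are installed on every Pi.
  ansible.builtin.apt:
    name:
      - podman   # container engine Ceph uses to run its daemons
      - lvm2     # provides lvcreate, which Ceph needs for OSDs
    state: present
```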
All right, with all that done, I could add all the hosts. It took a few minutes for Ceph to get them all healthy, but after five or so minutes they all showed up, and Ceph could tell me how much raw storage was on each Pi.
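Whether you add the hosts through the dashboard or the CLI, the orchestrator command from the first Pi looks roughly like this; the hostname and IP are placeholders, and you'd repeat it for each node.

```bash
# Add one of the other Pis to the cluster (repeat for each remaining node)
sudo ceph orch host add deskpi2 10.0.100.102
```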
Looking over at the main dashboard, the cluster status was now OK, and it showed the total raw capacity available to Ceph was 3.3 TB. Not bad.
I wanted to get NFS working so I could test everything from my Mac over the network. So I tried that, but I kept getting this error. I even tried installing some missing packages, but even with that, it still wouldn't create the NFS service.
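For context, creating an NFS service through Ceph's orchestrator normally looks something like the command below (the cluster name and placement are placeholders), though in my case the service never came up.

```bash
# Create an NFS service named 'cephnfs', placed on the first Pi
sudo ceph nfs cluster create cephnfs "deskpi1"
```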
So instead, I set up a local storage pool and ran benchmarks using Ceph's built-in benchmarking tool.
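If you want to run the same kind of test, Ceph's built-in benchmark is rados bench; something like this gives you comparable write and read numbers (pool name and durations are just examples).

```bash
# Create a small replicated pool to benchmark against
sudo ceph osd pool create benchpool 32

# 60-second write test, keeping the objects around for the read test
sudo rados bench -p benchpool 60 write --no-cleanup

# Sequential read test against the objects written above
sudo rados bench -p benchpool 60 seq

# Remove the benchmark objects when you're done
sudo rados -p benchpool cleanup
```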
Write speeds were around 70 megabytes per second, and read speeds were around 110 megabytes per second. That's better than I thought I'd get, and in fact, that's about how much speed I could get through a gigabit network, so I'm not really disappointed. Encryption or other fancy features will slow it down, but getting 110 megabytes per second on a Pi is pretty decent.
And you might be tempted to think just plugging two ethernet cables into the board will double the network bandwidth, but it just doesn't work that way. Since the switch on the board isn't managed, you can't configure anything like link aggregation, so at best it'll just provide a redundant network path. I actually tested this out, and the total speed when running iperf between my Mac and the board, no matter how many Pis I connected to, always maxed out at one gig.
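The test itself is simple: iperf in server mode on the Pis and clients on the other machines. A minimal version, assuming iperf3 and the hostnames from earlier, looks like this:

```bash
# On each Pi you want to test against:
iperf3 -s

# From the Mac (or any other client), in separate terminals,
# point a client at two different Pis at the same time:
iperf3 -c deskpi1.local -t 30
iperf3 -c deskpi2.local -t 30

# Add the two results together: through this board they still
# total roughly one gigabit per second.
```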
And to prove it's not just a weird networking issue on my Mac, I even ran a benchmark across two switches to two computers simultaneously, and total throughput was always under a gigabit. In fact, sometimes the network performance was a little worse, and I wonder if the switch chip might have been overheating a little. It only got up to about 50 degrees Celsius in my thermal testing, but maybe it doesn't like that much switching. Not sure.
So after all that, could this thing replace a $12,000 enterprise storage appliance like the Mars 400? In a word, no, and there are two main reasons for that: networking and the I/O limitations of the Raspberry Pi. The Mars 400 has less memory and fewer CPU cores, but it has faster internal ethernet connections plus four 10-gigabit uplinks, so that thing can pump through gigabytes per second. The Pi can't really touch that, especially since the Super6C can only push through one gigabit over the network.
So this board is an interesting one. The Turing Pi 2 is a very different product: it has managed ethernet, so you could assign VLANs or set up link aggregation, while this board has a dumb switch, so having two ports on the back isn't actually all that useful. The Turing Pi 2 can work with other form factors like the Jetson Nano; the Super6C only works with CM4-compatible boards. And the Turing Pi 2 has other neat tricks, like mini PCIe slots, which I used to build a remote 4G Kubernetes cluster that I ran out on my cousin's farm. So both of these boards have their virtues, but I think the biggest thing going for this board is how thin it is and how it crams six CM4s onto one board. It can be very useful for experimentation, or even for some types of edge computing.
Oh, also: you can buy it right now, and you don't have to wait for a Kickstarter. Assuming Raspberry Pis ever become available again, I think this board would be ideal for doing things like learning Ceph or Kubernetes or other clustering tools. It's certainly a lot less hassle than putting together separate Raspberry Pis, power supplies, ethernet cables, and a switch. If you want to pick one up, check out the link in the description. I've also linked to all the other parts I used in my build, and all the guides and code used in this video. Until next time, I'm Jeff Geerling.