
Monday 30 May 2022

Storage server - Dell R515

In an attempt to create a ('huge') data pool for backups, I've been keeping an eye out for a new server. I have a figure of €400 in my head as the maximum budget for doing this. My criteria for this server were the following:

  1. Low power consumption
  2. 8 x LFF 3.5" drive bays
  3. As always, cheap to buy!
These reasons are pretty much self-explanatory, with maybe the exception of the second point, so let me briefly explain. SFF (2.5") hard drives are mainly smaller-capacity drives when compared to LFF (3.5") drives, and SFF bays are primarily used with SSDs these days. I think the maximum capacity of a typical SFF HDD is 1.2TB or thereabouts. Multiplying the drive capacity by the number of drive bays gives you the total capacity of the system, so SFF generally means a smaller-capacity but faster array, whilst LFF is slower but offers greater capacity.
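To make that capacity point concrete, here's a quick back-of-the-envelope sanity check (the 2TB and 1.2TB figures are just the example sizes mentioned above):

```shell
# Raw capacity = number of bays x per-drive capacity.
# 8 LFF bays of 2TB drives vs 8 SFF bays of 1.2TB drives.
echo "LFF: $((8 * 2)) TB raw"
awk 'BEGIN { printf "SFF: %.1f TB raw\n", 8 * 1.2 }'
```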

For a long while I was looking into buying an HP 8th-generation DL3x0 server. I had previous experience with a 6th-gen DL360, so it would basically have been an updated version of that. However, these are mainly SFF servers (although you can, and I did, find a couple of LFF variants). Ultimately, what put me off the HP server was its processors' power draw. It wasn't significantly high, don't get me wrong, and it was lower than the 6th-generation server I had previously, but I had read that the AMD-based machines consume up to a third less power. We'll be able to look into this in detail later on, once the server is up and running.

The front of my new server; it arrived with all caddies (no HDDs) and the front bezel.


Inside the Dell Poweredge R515 server


Cost so far: Dell PowerEdge R515 server, 8 x 3.5" drive caddies, H700 PERC controller, shipping from Bulgaria, total all-in cost of €228.49. Yes, I agree that doesn't seem to fit my 'cheap to buy' criterion, so let's take a quick look into that. €8.59 (x8) is the cheapest Chinese 3.5" caddy clone available on eBay (deduct €68.72 from the server's cost = €159.77). Servers are heavy, and these are not cheap things to send in the post!! (Deduct, let's say, €30 for postage = €129.77.) Even at €130, I feel that's not a bad price for a server.
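The maths above as a one-liner sanity check (the €30 postage figure is just my estimate from the text):

```shell
# Effective server price once caddy cost and estimated postage are deducted
awk 'BEGIN { printf "EUR %.2f\n", 228.49 - (8 * 8.59) - 30 }'
```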

O.K. So you've got your hands on your new server... what next? The first thing that I did was to remove the 4 x 4GB PC3 ECC memory and replace it with 4 x 8GB PC3 ECC memory, doubling the server's RAM from 16GB to 32GB. Then I wondered, would the server take those 4GB DIMMs in addition to the 8GB DIMMs I had just installed? The answer is yes, but there is a 'but'!

I found the memory to be a bit of a pain. Not the installation, that was fine; it was the server's rules about how memory must be populated that were a little strict. For instance, with 2 CPUs fitted, all the white memory slots (2 slots per CPU) need populating with identical memory. If you only install into the first slots (A1 and B1) it'll annoy you with the message 'Memory is not optimal' on boot and ask you to press F1 to continue or F2 for setup. In the end I populated all the white memory slots with 8GB DIMMs (4x8GB=32GB) and all of the black memory slots with 4GB DIMMs (4x4GB=16GB), again getting the 'Memory is not optimal' warning. So I removed the 4GB DIMMs and partially loaded 2x8GB DIMMs into A3 and B3 respectively... again facing the 'Memory is not optimal' message. You can work around this by disabling keyboard errors in the BIOS. I did this, but later started to wonder whether it would cause performance issues, so rather than fret about it I ordered another 2x8GB DIMMs. That way I'd end up with 64GB of memory, no annoying BIOS messages, and no need for the keyboard workaround, so everything would be 'as intended'.

In the system BIOS you can disable the keyboard prompt on error


Cost so far, 2 x 8GB DIMMs €28 bringing the total to €256.49 (64GB system memory)

This server came to me with an H700 PERC controller. This is a hardware RAID controller and no good for the HBA / JBOD mode that I ultimately want for my ZFS array under TrueNAS. I've ordered a Dell H200 card which I'll reflash with the firmware from LSI's 9211-8i once it arrives.

H700 PERC controller removed AND server's LED panel reporting this occurrence


Cost so far, H200 controller card €32 brings the total to €288.49.

Things are certainly starting to add up on this server build and I still have no storage, so let's address that now. I was looking to install four drives, leaving four bays free for future expansion. I'd heard stories about 2TB drives being the worst you could possibly buy in terms of reliability, but for the life of me I couldn't remember which brand people were talking about. I didn't overthink this or research it any further; I put it down to being a particular brand rather than a general rule of thumb (I hope). I then came across five 2TB HGST drives for €79.97, so I ordered these and kept my fingers crossed! The idea here is that I can install four drives and keep the fifth as a spare. In order to do this properly (correct cooling airflow inside the server) I would also need to purchase four drive bay 'blanks'. These block the air coming in through empty bays, which forces the air to come in through the occupied drive bays, thus cooling the drives.

Cost so far, 5 x 2TB HDDs €79.97 + 4 drive bay blanks €13, brings the total to €381.46.


The final piece of the puzzle that we need to discuss is the operating system. I've mentioned previously that I intend to run TrueNAS (TrueNAS Core to be specific) and, although I could be tempted to virtualise this installation of TrueNAS under a hypervisor such as Proxmox, I don't think that I'm going to. Primarily this server will hold backups of everything that I have and use. I feel this is a good enough purpose for this server without diluting it with other roles. I'm even intending to eventually locate this server in a different building, so it'll also count as 'off-site' backup (located in the garage vs my home!).

Obviously the operating system itself has to be stored somewhere, and it cannot live on the main storage array, so what are the solutions? I've an idea for now, but this may change in the future. My initial idea is to run TrueNAS from a USB stick, plugged into one of the two internal USB slots inside the server. As a backup I'll mirror the boot USB and keep one as a spare in the drawer; this is an integrated feature in TrueNAS (and has been since the days of FreeNAS - how-to video here). If this idea doesn't work out then my fallback plan is a USB-to-SATA converter plugged into the internal USB slots, with either an HDD or an SSD attached. Obviously the USB stick idea will be more cost effective than an SSD!

An example of my fallback solution


Hopefully the final cost: 2 x 8GB USB sticks €12.99, bringing the grand total to €394.45.

LETS GET STARTED - BIOS Update

The system BIOS was previously v2.0.3, with the latest available on Dell's website being v2.4.1. An interesting note here: the default download is 8,793KB (nearly 9MB), which I thought was rather large for a BIOS. It is, in actual fact, the installer, the BIOS flasher and the whole software bundle for doing the update from inside Windows 10 64-bit. As I currently have no access to internal hard disk drives this clearly wasn't an option. Well, it could have been if I'd gone down the lengthy path of attaching a USB hard drive and installing Windows 10 onto that; instead, I chose to create a FreeDOS bootable partition on a spare USB memory stick using Rufus. On Dell's BIOS download page there is an 'other formats' option which takes you to a popup listing a smaller .exe file and a .pdf file. The PDF basically explains that the large .exe is for 64-bit Windows and the smaller .exe is the DOS BIOS updater. So I simply copied the smaller .exe onto my newly made bootable FreeDOS USB stick, booted and updated the BIOS in a fraction of the time.

 



 


With the BIOS updated I started to wonder if there was anything else I could update; firmware perhaps? I remember in the past (with that older G6 HP server I had previously) downloading an AIO .ISO updater boot disk from HP that scanned my system and updated everything that it could find. Surely there must be such a thing for a Dell system?

 


Again I utilised Rufus to create a bootable image to get this onto the server. However, maybe I downloaded the wrong thing? Or maybe I blatantly just don't understand this updater (it's far from automated), but when selecting the USB as the repository... computer says no. I couldn't really figure out what I needed to do. I even tried entering downloads.dell.com as a CIFS/NFS share because I read somewhere that Dell have retired their FTP server. Putting it simply, I just gave up!

 

 


 


The keen-eyed may have spotted that I had the server running from a digital power usage meter. I've already said we'll be using that to do some power tests once the server is fully operational, but as a teaser, here's the power reading whilst attempting to update the firmware.

87.8W.. Wow
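As a taste of the sums we'll do in those power tests, here's what a constant 87.8W draw works out to over a year; the €0.25/kWh tariff is purely an assumed example figure:

```shell
# Annual energy and cost at a constant 87.8W draw, running 24/7
awk 'BEGIN {
  watts = 87.8; tariff = 0.25               # EUR/kWh (assumed tariff)
  kwh_year = watts * 24 * 365 / 1000        # W -> kWh over a year
  printf "%.0f kWh/year, ~EUR %.0f/year\n", kwh_year, kwh_year * tariff
}'
```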

UPDATE: I am updating this on the 1st of June; my two sticks of 8GB RAM have arrived, were installed without issue and are working perfectly.

 
RAM installed without issue.


In addition, the H200 controller card has also arrived. I removed the end plate and installed it exactly where the previous H700 controller card was located.

Like for like comparison (H200 top, H700 bottom)


 


However, upon powering on... Houston, we have a problem!! Now, admittedly I hadn't done anything with the card (no reflash as yet) as I just wanted to test whether it was working or not, for eBay feedback purposes.

A quick search later and I came across this useful video by 'Art of Server', which basically told me that the server looks for a specific tag from the hardware installed into that particular 'internal' slot. In this case the server doesn't find that particular information tag, so it just refuses to boot.

It sounds a little drastic, admittedly, and also complicated, but I'll try to make it easier to digest. Basically I needed to change the hardware tag in the controller card's firmware: download it, edit it, and re-upload it. That's the simplest way I can explain it.

To do this I needed an operating system, plus I then needed to get files from the internet onto this 'temporary' operating system in order to work on the controller card... Linux Mint to the rescue! I happened to have a burnt disc of Linux Mint laying around, but if you don't have one of these you could always download a copy and use Rufus again to make a bootable USB, like we've done before.

 
The solution follows ;-D


The solution, initially, is to move the controller card into another slot (like one of the regular slots near the back of the server). It's only a temporary move and the card cannot stay there, because the SAS leads will not reach. If your leads do reach and you're happy enough leaving your controller near the back, then you can skip this next segment.

Modifying the H200's hardware tag:

- Boot into Linux (whichever distro you like); as I said previously, I used a burnt disc of Linux Mint.

- Install the build tools:
# apt install build-essential unzip



- Compile and install lsirec and lsiutil:
# mkdir lsi
# cd lsi
# wget https://github.com/marcan/lsirec/archive/master.zip
# wget https://github.com/exactassembly/meta-xa-stm/raw/master/recipes-support/lsiutil/files/lsiutil-1.72.tar.gz
# tar -zxvf lsiutil-1.72.tar.gz
# unzip lsirec-master.zip
# cd lsirec-master
# make

(If you hit a 'stdio.h' not found error here, the build tools are missing: # sudo apt-get install build-essential)

 


# chmod +x sbrtool.py
# cp -p lsirec /usr/bin/
# cp -p sbrtool.py /usr/bin/
# cd ../lsiutil
# make -f Makefile_Linux

- Modify the SBR to match an internal H200I. First, get the bus address:
# lspci -Dmmnn | grep LSI

0000:05:00.0 "Serial Attached SCSI controller [0107]" "LSI Logic / Symbios Logic [1000]" "SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [0072]" -r03 "Dell [1028]" "6Gbps SAS HBA Adapter [1f1c]"

Bus address 0000:05:00.0 (In my own case it was 0000:02:00.0)
We are going to change id 0x1f1c to 0x1f1e (Again it was 0x1f1d to 0x1f1e for me) 
** Disclaimer - Your bus address and id's could be different, make sure you use your own details **

 


- Unbind the card:
# lsirec 0000:05:00.0 unbind

Trying unlock in MPT mode...
Device in MPT mode
Kernel driver unbound from device

- Halt the card:
# lsirec 0000:05:00.0 halt

Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET

- Read SBR:
# lsirec 0000:05:00.0 readsbr h200.sbr

Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Reading SBR...
SBR saved to h200.sbr

- Transform binary SBR to (readable) text file:
# sbrtool.py parse h200.sbr h200.cfg

Modifying the hardware tag


- Modify the PID on line 9 (e.g. using vi, vim or nano):
from this:
SubsysPID = 0x1f1c
to this:
SubsysPID = 0x1f1e

Important: if in the cfg file you find a line with:
SASAddr = 0xfffffffffffff
remove it!

- Save and close file.
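If you'd rather script the edit than open an editor, the two changes above can be done with sed. This is just a sketch: the 0x1f1c/0x1f1e values must of course be swapped for the IDs that lspci reported on your own card.

```shell
# Change the subsystem PID and drop any bogus SASAddr line in one go
sed -i -e 's/^SubsysPID = 0x1f1c$/SubsysPID = 0x1f1e/' \
       -e '/^SASAddr = 0xfffffffffffff$/d' h200.cfg
grep SubsysPID h200.cfg
```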

- Build new SBR
# sbrtool.py build h200.cfg h200-int.sbr

- Write it back to card:
# lsirec 0000:05:00.0 writesbr h200-int.sbr

Device in MPT mode
Using I2C address 0x54
Using EEPROM type 1
Writing SBR...
SBR written from h200-int.sbr

- Reset the card:
# lsirec 0000:05:00.0 reset

Device in MPT mode
Resetting adapter...
IOC is RESET
IOC is READY

- Info the card:
# lsirec 0000:05:00.0 info

Trying unlock in MPT mode...
Device in MPT mode
Registers:
DOORBELL: 0x10000000
DIAG: 0x000000b0
DCR_I2C_SELECT: 0x80030a0c
DCR_SBR_SELECT: 0x2100001b
CHIP_I2C_PINS: 0x00000003
IOC is READY

- Rescan the card:
# lsirec 0000:05:00.0 rescan

Device in MPT mode
Removing PCI device...
Rescanning PCI bus...
PCI bus rescan complete.

- Verify new id (H200I):
# lspci -Dmmnn | grep LSI

0000:05:00.0 "Serial Attached SCSI controller [0107]" "LSI Logic / Symbios Logic [1000]" "SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [0072]" -r03 "Dell [1028]" "PERC H200 Integrated [1f1e]"

Finally, you can now shut down and move the card back to the 'dedicated internal' slot, and your server will boot once again!


Front hard drive bay:

With the H200 controller card installed exactly where I wanted it, I could utilise my front hard drive bays for the first time. My only problem? I only have two spare 1TB SATA drives. Thankfully servers (and I'd imagine this applies to most of them) will happily take SATA drives in addition to their native SAS drives, so I chucked in the two drives to run a brief test.

 


The second picture (above) shows the two drives being detected on boot by the H200. Later, however, I decided to load up the configuration (see below) and disable the BIOS boot support (so it doesn't show the drives); it saves boot time and isn't necessary anyway, so I chose the boot support option 'OS only'.
 
 


Installing TrueNAS CORE:

I 'burnt' TrueNAS Core onto my USB with... you've guessed it... Rufus once again, then booted from the USB stick. Just to remind you, my 2 x 8GB USB sticks are still in the post and haven't been delivered yet, so I'm testing using my 'fallback' method (an internal USB-to-SATA connector and a spare 2.5" HDD acting as the internal drive to install the operating system on). I'm sure that I don't need to show you the really simple installation process. It was my first time using TrueNAS and it was very simple to install and use.

Create a storage pool (I let TrueNAS analyse and recommend what to choose), assign a user, enable your particular sharing protocol (in my Windows case it was SMB), mount the network drive and you're literally up and running. Of course there are a whole host of other, more detailed options, but those are for another day. Today was just a test.

Mirrored network drive hosted on TrueNAS


Just a quick overview of the internals right now.. It's looking like this:



I'm afraid that's all I've got for you on this instalment. I'm now waiting for parts to arrive in the post (2x8GB memory, HDDs, HDD blanks) but don't worry... I'll let you know and we'll continue this journey together! Thanks for reading ;-D updating soon!

TO BE CONTINUED..

Friday 23 April 2021

Solar power, small is surprisingly large!

I've dabbled with solar power in the past, yet I've never owned my own equipment nor done anything for myself. That all changed last week when a friend lent me a small solar panel, a charge controller and a dimmable LED light, the idea being that I could 'experiment' and find out for myself. It's really been a huge learning curve, so thank you for the loan!!

The main purpose of this blog entry is really to share what I've learnt and hopefully to get some of my readers to dabble themselves: thinking about environmental reasons, thinking about self-sustainability. It's really quite simple and I'd like to try to reassure you that you don't need a professional installer nor a qualified electrician. You can even do this with simple tools that you have around the home.

Originally I had grand ideas of buying two large solar panels and fitting them onto the roof. That idea in itself could still occur (in the future) but what I've learnt and would like to share with you all, is that you don't necessarily need to do that. 

The first step before you even start looking at items to purchase is to try and work out exactly what you would eventually like to run 'off-grid'. For me it is a radio, a light and then I hoped to also be able to run an internet signal booster. In addition to those, I kinda like the idea of being able to charge my mobile phone too. With everything considered, my power requirements are relatively low.

Then you need to buy/obtain some form of battery to store the electricity in. Another friend of mine had a spare car battery and, although that isn't ideal, it's a great place to start. Ideally you should be looking to get a deep-cycle battery that has enough amp hours (Ah) to run your electrical equipment for a few days. I've seen 130Ah batteries new on eBay for around £80 and hope to upgrade to something like that in the future.
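As a rough sizing sketch: assuming a lead-acid battery of which you'd only want to use about 50% depth of discharge, and a small 10W average load purely as an example, a 130Ah battery gives you:

```shell
# How long would a 130Ah 12V battery run a 10W load?
awk 'BEGIN {
  ah = 130; usable = 0.5          # ~50% usable DoD for lead-acid (assumption)
  volts = 12; load_w = 10         # example average load
  hours = (ah * usable * volts) / load_w
  printf "%.0f hours (~%.2f days)\n", hours, hours / 24
}'
```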

12V wiring for a small setup like this can be done with quite light cabling, but there is a thing called 'voltage drop' to be aware of. As an electrical current moves through a circuit, a small amount of voltage is lost due to resistance in the wires; this loss, known as voltage drop, translates into a slight production loss from your solar array. When you go solar, one of the goals is to minimise voltage drop so that your system performs at peak efficiency, and because the loss is proportionally worse at 12V than at mains voltage, long runs or high currents genuinely do call for thicker cable. For short runs at low currents like mine, though, a thin pair of wires, of the size typically used for speakers, is perfectly adequate.
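To put some numbers on that, here's an illustrative voltage-drop estimate for a short 12V run. The copper resistivity constant and the 5m / 1.5mm^2 / 2A figures are example assumptions, not measurements from my setup:

```shell
# Voltage drop = current x round-trip wire resistance
awk 'BEGIN {
  rho = 0.017                     # ohm*mm^2/m, approx. resistivity of copper
  len_m = 5; area_mm2 = 1.5; amps = 2
  r = rho * 2 * len_m / area_mm2  # x2: current flows out and back
  vdrop = amps * r
  printf "drop %.2f V (%.1f%% of 12 V)\n", vdrop, 100 * vdrop / 12
}'
```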

Here's a few images of my system so far..

As you can see, I've used jump leads to connect a second battery to the first, increasing my storage potential. This isn't necessary for my current setup but, in my case, there is certainly no harm in having it.

In the second picture you can see the dimmable LED light on the left; below that is the solar charge controller, and to the right (screwed below the skylight) is a piece of MDF with two battens to prevent the MDF from warping in the heat of the sun. The solar panel itself is really quite small (50cm x 30cm), yet provides more than sufficient power for my 'current' needs!

Let's talk about the elephant in the room: mounting a solar panel under a window or skylight. Sounds crazily inefficient, doesn't it? Personally, I really was unsure whether it would even work, but it really does, and it saves a lot of hassle. External mounting of solar panels involves removing tiles/slates and acquiring the correct brackets for whatever mounting system you intend to use, and even getting access to the roof could be an issue. Additionally, once the panels are externally mounted you have to consider that they will be exposed to the elements and, although they are designed for that purpose, damage could still occur and maintenance may be required.

I mentioned before about running a radio. Well, I found (in my loft) an old camping TV/radio that runs from batteries OR a detachable mains lead (with a built-in transformer plug that drops the mains electricity from 220V to 12V). Simply cutting the transformer plug off the lead and connecting it straight into the charge controller enables this to run. I've powered on the TV on a couple of occasions; it draws a hell of a lot more power from the system, but it does work. Unfortunately it isn't of any benefit to me, as it would also require a TV decoder box connected and running, with an aerial, to pick up any free stations. The use of this TV/radio is only temporary, as I'm hoping to get an old car stereo or maybe one of those Chinese MP3 radio head units in the future.

Thus I have a light, I have a radio and I can charge my phone from the USB inputs on the charge controller. Then I wanted an internet booster: a repeater of sorts that would receive the wireless signal from the house and redistribute it. I already had an old Linksys WRT54G which was already flashed with the DD-WRT firmware. These routers run from a 12V battery using the same method as the TV/radio: cutting the transformer plug off and wiring them straight into the charge controller. The DD-WRT firmware basically gives you additional features and more options than the original manufacturer's firmware that the router shipped with, and it's compatible with many more makes and models of router than just Linksys. DD-WRT has a router database here if you're interested in re-purposing an older device that you may have laying about. I then set up a 'Repeater Bridge'; you can follow the same instructions as I did to get that up and running from here.

You'll see the car speakers and the cage ready to house the radio's future replacement. I've also added a switch to isolate all of the equipment housed in this makeshift enclosure. For now, when I want to turn off the router I unplug it (as seen!).

As the title of this post says, small (my solar panel) is surprisingly large (in terms of devices you can have running from it). So what's next for the future? I'm planning to add these :


This is just for the ease of adding in other devices, and to make the voltage easier to see (the charge controller does show the voltage on its grey LCD display, but that is kinda hard to read). Additionally, for a little extra functionality, I've already got a small inverter with a three-pin plug socket (150W), so I must connect that up as well.

I'll update this post when anything more gets added into this solar project. Thanks for reading.

Friday 11 December 2020

Graphics Cards Stock Levels

It's the 11th of December 2020. Covid-19 has disrupted the world and, in addition to that, there is huge demand for PC components. Insufficient manufacturing (versus demand) and poor distribution of the latest models of graphics cards from both of the graphics card giants have only made the situation worse. Then add in the extra Christmas demand too, and you're probably seeing hardly ANY stock.

So that got me thinking about the second-hand market. Of course you'll need to navigate a little more carefully, avoid the scalpers and bear in mind that you'll probably have no warranty. However, if you're willing to take that plunge (or at least entertain the idea) then you may find this post useful.

I've just been going through Passmark's high-end video card benchmark site. I collated the results into an Excel spreadsheet (removing the mobile GPUs, as laptops aren't applicable for this comparison). Then I painstakingly went through eBay (.co.uk) and attempted to get the three cheapest (B)uy (I)t (N)ow - BIN - prices for each graphics card, working out an average price for each card.

After all, these cards are available to buy today (second-hand). Once I had found the average price per card, I divided the price by each card's benchmark score. This gave me a price/performance ratio (lower is better) that basically highlights some alternatives that you may not have already thought to consider.
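The calculation itself is trivial. With made-up numbers (three hypothetical BIN prices and a hypothetical Passmark score, not values from my spreadsheet) it looks like this:

```shell
# Average of three BIN prices divided by benchmark score = GBP per point
awk 'BEGIN {
  avg = (180 + 195 + 210) / 3     # three cheapest BIN prices (example)
  score = 16000                   # hypothetical Passmark score
  printf "avg GBP %.2f, ratio %.4f GBP/point\n", avg, avg / score
}'
```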



If you want to download my spreadsheet file you can do so HERE. I hope it's as useful and insightful to you as it was for me. Thanks for reading my post, all the best for Christmas.

Thursday 8 October 2020

Project Blade (Network Server)

I've always been intrigued by the idea of buying a network server, I just don't know exactly why. Maybe it's the idea of having something that I've never owned before? Some computing hardware that I've been unintentionally neglecting? Maybe it's because I have a subconscious need for the benefits and features that it could provide me with: website hosting, home automation, a Plex server, a dedicated video processing unit... the list goes on! Maybe it's because I want to learn more about servers, and there is no better way than owning one for yourself?

I had previously snapped up a bargain and got my hands on 8 x 146GB 2.5" SAS (server) hard drives for a few pounds (£3.04). However, I was unsure exactly what to do with these as I had no way of accessing a SAS drive: SAS controllers can handle SATA drives, but unfortunately not vice versa.

Additionally, I had already made a mistake in the past: buying some DDR3 ECC registered RAM for a desktop computer that I would be converting to a Xeon processor, and which I assumed would therefore take the cheaper ECC memory. In fact I should have been buying DDR3 ECC unregistered RAM. I was completely unaware that there were two types of ECC memory and just presumed ECC memory was all the same (i.e. intended for network servers), but hey, we all live and learn!

Whatever the reason was for wanting a network server when I saw this eBay listing I just couldn't resist!

So now I own a network server! This post will be dedicated to this server and my trials and tribulations with it. I will update this post every time something new happens, so bear with me as it could see frequent updates!

I guess I should start by telling you a little bit about this network server. It's a seventh-generation HP BL460c blade. Full specifications of the 7th Gen or 'G7' can be found here:

https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c02535780

Interestingly, there is NO mention of Xeon E5530 CPUs in those official specs, so these are probably an afterthought and not a factory-fit option. Spec-wise the E5530s don't look that bad, and I'd rather have these two CPUs fitted than NO CPUs! Potentially, in the future, I may upgrade them to the low-powered, six-core L5640.

https://ark.intel.com/content/www/us/en/ark/products/37103/intel-xeon-processor-e5530-8m-cache-2-40-ghz-5-86-gt-s-intel-qpi.html

As you can see from the listing, it didn't come with RAM, HDDs or HDD caddies. Because I already had the HDDs and the RAM in my parts bin, I just needed to get my hands on a pair of caddies.

Once I had ordered the caddies, I started researching the 460C server online. How was I going to power it on? There are plenty of server PSUs on eBay, but which one would work? At the time I just didn't know. Now I do: virtually all of these server PSUs are 12V (although you can mod them to get more voltage out, 13.8V being common, but that's unnecessary for this project).

The next problem was that there are none of the usual external connectors on the blade server! There is an unusual connector on the front of the server and the large square connector on the rear (that mates with the official chassis), so I needed to do a little more research. The answer is that there is an HP dongle which connects into the front connector and provides two USB ports, a VGA adaptor and a serial port. The HP part number for this is 409496-0001 / 416003-001 and you will need one in order to be able to do anything with your server.

Additionally, I had no idea that the majority of blade servers will NOT work outside of their official enclosure. These enclosures are very large, house multiple servers and cost thousands, so getting one was totally NOT an option! Luckily, the majority of blades have a hidden selection of DIP switches, and by researching your particular server model you should be able to find exactly which DIP switches you need to change to get your server working outside of the official enclosure.

In the case of the 460C blade server, changing DIP switches 1 and 3 from their OFF positions to their ON positions will do the trick. Here's an image of the location of these DIP switches on the G7 460C...



As you can see from the information panel on the right, they don't mention even half of the DIP functions! Obviously they don't want you to do this!! ROFL.

Another useful spot on that diagram is item #12, the on-board USB header, and I have my own plans for that. Thinking ahead to when the server is fully set up, running headless somewhere in my home, I don't want to have to keep the HP dongle installed just to get a network connection. I'd much prefer to extend that internal USB header to the outside of the chassis. So that's exactly what I've done; here are a few photos, let me walk you through it.

Firstly we need to open up the server and locate the internal USB header.






Buried under the drive bays, there it is, we've found it! So now I'm looking for an old donor USB cable, and somewhere I have a circuit-board-to-USB-socket lead I salvaged out of a TV decoder box a few years ago...



With the two USB cables cut to length and soldered together, I could decide on a good place to locate the external USB connection.


I decided to install the USB connection just above the SD card slot. It's still low enough not to obstruct the drive bay area and it looks in keeping with the other external openings. Before going any further, it was time to mask up the server to protect it from the tiny particles of metal which would be generated as soon as the Dremel got going.



All measured up and masked up, it was time to start grinding... sometime later...




Probably the hardest part of doing this was gluing the socket into place (hot glue gun). It didn't go exactly where I wanted it to go (i.e. central) but hey, it's good enough for me! [Later this gets revised with a screwed-in USB connection]


However, the theory is perfect: once this server is fully configured, up and running and living on a shelf, I won't have to leave the front HP dongle attached to access a USB slot. The USB-to-RJ45 network dongle is now attached to the internal USB header via my extension.

While I had the server out I remembered having (somewhere) a banana plug lead which went to an XT connector, and luckily for me I just found it. This will be part of the next segment of Project Blade: the PSU. Stay tuned until then!


UPDATE - 15/10/20: So today I got a fan through the post. Its dimensions are 100mm x 100mm x 15mm, which I hope will be sufficient to extract the air through our server case. I had previously mocked up a folded piece of paper to try to figure out what size of fan I could use, noting motherboard obstructions and positioning. Here's the new fan...


Using electrical tape to 'guesstimate' its future position...


However by doing this I discovered that I had a problem...


So looking at the problem in detail I could see that the highest points inside of the case were..


This caused the fan to sit slightly too high (vertically) for the case to close. Admittedly, I could have removed the battery holder which is located above the mezzanine card. Instead, I decided it could be useful to support the fan should the silicone break, so I just bent the side of it over, allowing sufficient clearance.


Therefore a little 'tinkering' was needed! I came to the conclusion that if I shaved off the four bottom corners of the fan and cut a full hole in the top of the case, the fan would 'drop down' into the hole and sit slightly above the top of the case, giving me those extra few millimetres that I needed for clearance.


 


Just like that, the fan dropped into place! As if it was as easy as that!! Here are the profile views of the fan once it was finally in its place. Pretty good, I thought!


Now, it was just a case of holding the fan in place. I thought about drilling holes and using fan screws, however an additional concern of mine was sealing the gaps around the fan as well. I had intended to use my hot glue gun to seal the fan in place, but then I had a brainwave! If I used silicone instead, then if the fan ever needed replacing the silicone would pull out in one single lump, and it would also provide an airtight seal. So.. silicone was used!!


Admittedly it's not great (flashback to the USB port, anybody?!), but you'll never see the underside and it'll certainly do the job. It also means there's no need for the fan screws.

How is the server looking now you may ask? Here's a few pictures.


As you can see from the left picture, we have 3mm of fan clearance. This enables the top case to be fitted and removed as originally intended. The fan itself sits 3mm above the top case, so again that's pretty good in itself.

The next thing which needs to be addressed is powering the fan, which is a 12V fan. I intend to utilise a second set of banana leads (coming in the post) and plug them directly into the server's second set of banana plugs.

The left pair of banana plugs will be for the fan, whilst the right pair will be for the power supply (also in the post). Both sets of banana plugs are wired together and into the motherboard, which is also 12V, so this should be as simple as that. Of course, we will find out in our next instalment! Thanks for reading thus far :-)

UPDATE - 15/11/20 : So welcome back my interested readers! What's been going on in my world of Project Blade since last month you may ask? Quite a bit actually! Let me catch you up..

I was poking around in my spares box when I came across an old fan guard. Thinking about it logically.. yes, it certainly would be a good idea to temporarily fit that to the server's newly mounted exhaust fan. Even if it would initially be secured only by Blu-Tack! (later, a few dabs of hot glue)


The power supply arrived. In the end I went for a 750W HP server power supply, HSTNS-PL18 (which is available under any of these HP part numbers: 506821-001, 511778-001 or 506822-101). I got this for the bargain price of £7 (yes, £7!).


In terms of getting the power supply to automatically switch itself on (once it has power), I referred to the internet of many things, and in particular an RC forum which showed it can be as easy as soldering a wire from pins 33 to 36. Other internet sources have shown soldering a 1000 Ohm resistor across those same pins, however I didn't have a resistor so the wire worked well for me. After modifying the server power supply, this meant that as soon as the wall socket was plugged in and switched on, the power supply would be live and would automatically power up. It also stays live until it's switched off at the wall.

As you can see from my own shots, after soldering I used my hot glue gun to ensure the contacts remained securely in place, and I also used a couple of cable-tie bases to ensure that wire strain doesn't dislodge them.


The 12V top-mounted fan connects directly into the bullet connectors, meaning if the power supply is live then the fan is on (even if the server is not). And with that, I took the plunge and powered on the server's power supply.

At this point the server itself has an orange/amber light in the middle, below the two drive bays. If the server is left alone, it will automatically turn green after a few minutes and then begin its boot process. However, you can also press the amber light (which is actually a button) to get it to turn green instantly and begin the boot process a bit sooner. The boot process of this 7th generation Blade server will take at least three minutes and thirty seconds, so don't be expecting a fast POST!


Bearing in mind that I had second-hand SAS drives installed and didn't know whether they were working, defective, full of data or blank, I booted from a USB thumb drive and installed Windows 10.

It was at this point that I discovered I had no internet connection. My SMC EzConnect RJ45-to-USB network adaptor proved itself to be obsolete. Apparently you cannot get a driver for it any newer than Windows XP! So with a newer, Windows 10 compatible RJ45-to-USB adaptor ordered from China (thanks again eBay), we proceeded with a wireless network adaptor borrowed from my Raspberry Pi.

I was interested to see what CrystalDiskMark would make of the SAS drives, and I was kinda surprised at their overall speed. Bear in mind this was a single disk being tested; no RAID was set up at this point.


After that was done I installed SpeedFan, just so that I could get an idea of the internal temperatures, and I am so glad that I did. Core temperatures were sitting in the high 70's and low 80's, which was alarming.

I realised that I needed to get the server 'buttoned up' so that the fan had to draw air in from the front of the chassis, through the hard drives, RAM and CPUs. For this I made a small wooden back-plate.


Then I basically used silver parcel tape to cover all of the holes (yee-haw!). Airflow was better, but temperatures had only really dropped to the high 60's. The CPU itself has a stated maximum of 76 degrees, so that wasn't allowing much headroom. Things just HAD to change!

So once again I raided my parts bin, hoping that I had some kind of adjustable fan controller. The thought was that if I could fit one to the server, then I could hopefully turn the fan up, get it to draw more air through, and ultimately lower the temperatures I was seeing.


Above you will see that I had to alter the fan connections slightly: now the positive goes through the fan speed controlling dial and then onward to the fan. The result, however, wasn't what I had hoped for. I had assumed the fan wouldn't originally have been running at 100%, but it was. All the fan controller could do was slow the fan slightly or stop it. So I needed to rethink the fan I was using.


I changed the base plate slightly and passed the fan wires directly through. I have a few smaller (and much noisier) fans, but obviously I'd already cut a fairly big hole, so I also made a plastic template and had to use plenty of tape.. this server is turning into something primarily held together with silver tape! That will be addressed in the future!


I also found a BIOS setting, 'Optimal cooling', under processor management, which basically underclocks the CPU from 2.4GHz to 1.6GHz. With that change made in the BIOS and the newer, higher-RPM fan installed, my core temperatures are in the high 40's and low 50's, which is much better. The negative is that now it SOUNDS like I own a server!! This is something else that will need refining, but for now I'm happy to continue on this journey.

I went through the seven 2.5" 136GB SAS hard drives, checking them, seeing if anything was installed, and basically attempting to understand how the P410i controller card worked. One of the hard drives was dead, but the other six were working fine. All were empty, and all had a variety of different numbers of logical drives assigned. I was confused.


Let me share the knowledge that I've learnt with you. Even if a SAS hard drive is empty, it still carries the metadata for the previous hard drive array it was part of, which the RAID card reads back. Thus in the photo above you will see that one of those drives, installed on its own, was originally part of 6 logical drives. To resolve this it's basically a case of pressing F8 and deleting all of the previous logical drive settings.

I realised this a little bit too late (I had already installed Windows 10), and I certainly didn't want to reinstall it. So I circumvented that process by using Macrium Reflect Free to take an image of the Windows partition and save it externally to a removable hard drive. Then, when I inserted two cleaned SAS hard drives and created a RAID 1+0 array (striping for slightly increased performance, mirroring for data integrity), I could image the Windows partition back across without losing any data. It was quite quick as well, taking under 12 minutes.

To compare against CrystalDiskMark's earlier scores, I re-ran the benchmark to see what kind of performance boost I had received from creating the logical drive and RAID array.. I nearly fell off my chair!


In terms of read speeds, it initially measured 80.07 and went up to 134.24 (a 68% increase). The initial write speed of 78.76 went up to 111.12 (a 41% increase). WOW!! Definitely worth doing :-)
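Just to show my working on those figures, here's a throwaway snippet for double-checking percentage changes on benchmark scores (worth noting the distinction between "percentage increase" and "percent of the original" - 134.24 is about 168% *of* 80.07, but only a 68% *increase*):

```python
# Percentage increase between two benchmark scores:
# increase (%) = (new - old) / old * 100
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

# CrystalDiskMark MB/s figures from this post
print(round(pct_increase(80.07, 134.24), 1))  # read speed:  67.7
print(round(pct_increase(78.76, 111.12), 1))  # write speed: 41.1
```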

Whilst I was doing all of this, I came across a rather annoying glitch, pictured below. I don't think it was anything associated with what I was doing; more of a coincidence really, I think?


1783-Slot 0 Array controller is in a lock-up State due to a hardware configuration failure. (Controller is disabled until this problem is resolved)

Luckily this isn't quite as serious as it may sound. It's just a case of opening up the server, going under the drive bays at the front, and removing the RAID controller card for 10 minutes or so, additionally unplugging the battery from the RAID controller card as well.

Whilst I was back under the hood of the server, I had been meaning to address my hot-glued USB side port, which had fallen off internally. Whoever said using hot glue would have been a good idea?! hahaha!


Back in Windows 10, it was time to take a quick look at the system memory. If you remember, I had previously (incorrectly) purchased some ECC registered RAM and had never before been able to use it. Here's what CPU-Z says about the memory..


What else was left to do? Now that the cooling was a lot better, I wanted to run Cinebench to stress the CPUs and see what kind of result this server is capable of getting. I went back into the BIOS and removed the underclock so that the CPUs were no longer restricted to 1.6GHz.

I fired up Cinebench and ran it twice. The best score I got was 790 and, to be honest, I didn't really know if this was a good thing or a bad thing! I also took a quick note of the power draw whilst the server was under full load (194W). The full-load SpeedFan core temps maxed out at 61 degrees. Perfect, I thought.


That was pretty much all of the system-related checks over with. Now, what am I actually going to do with this server?! I decided to install Ubuntu Server on one of the spare hard drives. Unfortunately I didn't really get any further than downloading the image, writing it to USB and making it bootable. Once inside my new Ubuntu Server operating system, I discovered that the Raspberry Pi's wireless dongle is rather problematic to get working. I decided to wait until my wired connector arrives.

In the meantime I will continue to use Windows 10 and just experiment with a few game servers. My son has already installed a Minecraft server. He says that "it can be a little laggy for a minute or so until that clears and then its fine again for an hour or so". I haven't looked into this yet because it's running far from optimally. I'm guessing that lag should hopefully be gone once the server has been hard-wired.

So now is a good time to pause this update and prepare you for the future adventures of Project Blade!

- 24GB of ECC Registered DDR3 has been ordered, although pictured in that auction was 48GB, so I'm keeping my fingers crossed that the latter amount will arrive!!
- 2 x Intel Xeon L5640 (SLBV8) 2.26GHz 6-core CPUs have been ordered.
- A 3-port USB 2.0 mini hub has been ordered; the thinking here is that when the server is running headless, it will be handy to have a few more USB ports.

QUICK UPDATE - 25/11/20 : 


As it turned out, I was right and bagged myself 48GB of DDR3 ECC memory (instead of the advertised 24GB - so that was a fantastic WIN!), especially when the cost was €25 including shipping!!

The memory went in without a hitch. You just need to remember that each CPU has triple-channel memory: the closest two slots are Channel 1, the furthest pair of slots are Channel 3, and therefore the two slots left in between are Channel 2.

Each of my processors now has 32GB of memory that it can access (2 CPUs = 64GB RAM).
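If it helps to visualise the population rule, here's a rough sketch of it as a lookup table. The slot numbering is my own illustration (not the board's silk-screen labels), but the pairing follows the nearest/middle/furthest rule described above:

```python
# Hypothetical slot-to-channel map for one CPU, following the rule above:
# nearest pair -> Channel 1, middle pair -> Channel 2, furthest pair -> Channel 3.
# Slot numbers here are illustrative, not the board's actual labels.
SLOT_TO_CHANNEL = {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3}

def channels_in_use(filled_slots):
    """Return the set of memory channels populated by the given slots."""
    return {SLOT_TO_CHANNEL[slot] for slot in filled_slots}

# Filling all six slots per CPU keeps all three channels fed:
print(sorted(channels_in_use([1, 2, 3, 4, 5, 6])))  # [1, 2, 3]
```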

After the memory was correctly installed, I decided to tackle the new CPUs. (In terms of troubleshooting, I advise against installing all of your new parts (CPU and RAM) at once; just do them one at a time.) So the heat-sinks came off in double-quick time, the old quad-core E5530 CPUs came out, and the newer six-core L5640 (Low Power) CPUs went in. A quick clean-up and fresh thermal paste, and it was time to switch it on again!


Excuse the blue hue to that photo; I had swapped over VGA leads and that particular lead IS that bad, unfortunately. Still, not to worry, as the server will be running headless from now on. It's up and running with Windows 10 (until my newer RJ45-to-USB adaptor arrives), then we'll be going down the Ubuntu route.

Just for a quick comparison, I fired up Cinebench, hoping to do a like-for-like benchmark against my first run. I was actually doing this offline with no internet connection (my son had 'borrowed' my WiFi dongle), and I just clicked on the search icon, typed "Cinebench" and then clicked it. It wasn't until I ran both benchmarks that I realised it was a completely different version of Cinebench!! I will run the exact same version that I did earlier for a better comparison next time I fire up the server. Until then you'll just have to make do with the results from this particular version!


Excuse the quality of the above image; I screen-grabbed it from the server running VNC, hence why I've written in the 'points figures' in case you can't make them out!

So, what's coming up in the future?

- A wattage-at-the-plug test, utilising the correct version of Cinebench for a better comparison.
- Hard wired USB to RJ45 dongle.
- The search for faster network connectivity? 
- Sharing the partial knowledge of the rear 100-pin Molex connector's pinouts.
- Sharing the partial knowledge of the mezzanine connector's pinouts.
- Possibly even attempting to make a mezzanine to PCI-E x1 connector (with the hope of being able to run a PCI-E x1 network adaptor to get faster LAN speeds).



MINI UPDATE 18/02/21 :

All my open phone browser tabs got unintentionally closed when I somehow managed to download 'Barcode Scanner', which was actually a pop-up browser advert malware piece of s**t. So I'm going to need to re-research the four or five tabs I had open with the mezzanine information. It's a little annoying, but really I should have posted this information and then I wouldn't have lost it, so I've only got myself to blame. Thought for the day: "Don't put off doing something today!".

As for the Blade, it's up and running daily. My son and his buddies are on it (aka the Minecraft server). I've been looking into custom building a 4U rack server, but I really cannot find anything in a similar price range to this Blade. 64GB of new DDR3 or DDR4 memory alone costs about the total price of Project Blade, let alone the rest of the components that I would need!

In terms of the Blade's exhaust fan, I've had just about enough of the noise from the 92mm fan. I managed to disassemble the server this morning before my son switched it on. I wanted to get the part numbers to research the fan that's currently in use, along with the others I've had installed previously. Then, using this information, I plan to research and purchase a better exhaust fan. Here are my previous fans.

Current : Delta Electronics AUB0912VH, CFM 67.9, 45dB, 3800 RPM, 92mm 
(31-37 degrees across all cores idle)

Originally : PC Cooler, CFM 22.5, 22dB, 2000 RPM, 110mm.
(I forget exactly, but it was mid 60's under load)

Experimented using : Cooler Master A12025-12CB-3BN-F1, CFM 44.03, 19.8dB, 1200 RPM, 120mm.
(This was initially looking promising, but again mid 60's under load)

So finally I had some actual data to act upon. No more trial and error with fans! The original fan size (110mm) was chosen as the largest that I could possibly fit into the Blade housing. Unfortunately it was never going to meet the Blade's cooling requirements. I had some hope for the Cooler Master fan; after all, it was a bigger fan. However, it looked a little bit 'gawky' perched on top of the Blade, and it was nowhere near as nice looking as the thin 110mm original.

I remembered the phrase 'functionality over looks' and thought, well, if it does the job.. then I'll stick with it. Unfortunately it too was never going to do the job required. So the Cooler Master was uninstalled and the 92mm Delta Electronics is back on the Blade again.

So began my online search for a better fan. Ironically, fans between the sizes of 80mm and 120mm are kinda niche. So I thought I'd bite the bullet and go for a 'gawky' looking 120mm fan plonked on the top of the Blade server. I wanted a minimum of 70+ CFM, with a noise level of 20-25dB, so I've just ordered this :



Can ya tell what it is yet?! - It's a Noctua NF-P12 redux; its basic specs are CFM 120.2, 25dB, 1700 RPM, 120mm. So this thing should be fantastic. I'll fill you in once it arrives.
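For the curious, the shortlisting was nothing fancier than filtering the specs above against my criteria. Here's a quick sketch of it (the figures mirror the ones quoted in this post; the criteria are the 70+ CFM and 25dB ceiling I mentioned):

```python
# Filter candidate fans against the criteria above: CFM >= 70, noise <= 25 dB.
# Figures are the ones quoted in this post, not independently verified specs.
fans = [
    {"name": "Delta AUB0912VH",      "cfm": 67.9,  "db": 45.0},
    {"name": "PC Cooler 110mm",      "cfm": 22.5,  "db": 22.0},
    {"name": "Cooler Master A12025", "cfm": 44.03, "db": 19.8},
    {"name": "Noctua NF-P12 redux",  "cfm": 120.2, "db": 25.0},
]

suitable = [f["name"] for f in fans if f["cfm"] >= 70 and f["db"] <= 25]
print(suitable)  # ['Noctua NF-P12 redux']
```

Only one fan on the list clears both bars, which is exactly why it got ordered.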

In the next update, I'll talk you through the three USB-to-RJ45 adaptors I've purchased from China, and which one I prefer and would recommend.



USB RJ-45 ADAPTOR SHOOTOUT - 24/03/21 :

Admittedly I'm getting a bit lazy now with this project, which is a shame because the server is running pretty much 24/7 and it's quiet. It's actually quieter than my D-Link 48-port Gigabit switch! It's doing everything that I'm asking it to do without any drama. It's really fantastic! However, as I'm getting tied up with other projects, I'm subsequently neglecting Project Blade.. so tonight, Matthew.. I pulled my finger out and did the necessary with the USB network adaptors!




Now let's not get ahead of ourselves here!! I'm no rich YouTuber flaunting his money. I purchased three USB-to-RJ45 adaptors, mainly because the first one was absolutely useless (you'll see later), and so when buying the second one, I also purchased the third at the same time. So without further ado, here are my adaptors in detail.


ADAPTER #1 - AVERAGE = P9, 5.03 DL + 5.49 UP :





The infamous first USB RJ45 adaptor that I purchased. I didn't really expect much, but equally I certainly didn't expect the speeds that I got. Still, at least it was working, and that in itself was a start.

Installation was a cinch; the USB adaptor also shows up as a CD-ROM drive, so it was simply a case of installing the driver from the inbuilt CD-ROM and voila! Quick and easy.

I did worry myself about the maximum speed of USB 2.0 though, and here's what Google says:

USB 2.0 provides a maximum throughput of up to 480Mbps. When coupled with a 1000Mbps Gigabit Ethernet adapter, a USB 2.0 enabled computer will deliver approximately up to twice the network speed when connected to a Gigabit Ethernet network as compared to the same computer using a 100Mbps Fast Ethernet port.

Thus, I was thinking... WTF!! Second and third adaptors ordered ;-)
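To put that quote into rough numbers: 480Mbps is USB 2.0's raw signalling rate, and protocol overhead eats a chunk of it. The 60% usable-bandwidth figure below is my own ballpark assumption, not a measured value, but it shows why a gigabit NIC on a USB 2.0 port still comfortably beats Fast Ethernet:

```python
# Back-of-envelope: what caps a gigabit NIC hanging off a USB 2.0 port?
# The 60% usable-bandwidth figure is an assumption; real overhead varies.
USB2_RAW_MBPS = 480                         # USB 2.0 signalling rate
USB2_USABLE_MBPS = USB2_RAW_MBPS * 6 // 10  # ~288 Mbps after assumed overhead
GIGABIT_MBPS = 1000
FAST_ETHERNET_MBPS = 100

ceiling = min(USB2_USABLE_MBPS, GIGABIT_MBPS)  # the USB bus is the bottleneck
print(ceiling)                                 # 288
print(ceiling // FAST_ETHERNET_MBPS)           # still a multiple of a 100Mbps port
```

So the adaptor, not the network, was always going to be the limit, but nowhere near as low a limit as adaptor #1 managed.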



ADAPTER #2 - AVERAGE = P8, 64.61 DL + 28.87 UP :





The second adaptor was chosen because, although it was still USB 2.0, it also incorporated a three-port USB hub which can optionally be powered to support power-hungry devices.

I wasn't really worried about a USB 3.x device because I already knew that my Blade server doesn't have USB 3, so I wasn't looking for anything more than USB 2.0.

This second adaptor also uses the newer USB-C connector as its main connector, although my apologies, it's hard to see in my photo.

Installation of this device truly was plug and play; no device drivers were installed by me (possibly Windows 10 may have done that in the background, but if it did there were no notifications), and it was up and running in seconds. I was much happier using this network adaptor!!



ADAPTER #3 - AVERAGE = P8, 112.13 DL + 28.98 UP :





The final adaptor, and well.. here's a turn-up for the books. I would never have guessed that a USB 3.0 device would have outperformed a USB 2.0 device by so much, especially when it's running from a USB 2.0 port! WOW!! Some difference.

Installation wasn't exactly plug and play, more like plug and pray.. then nothing. Finally, looking in the packaging, I found one of those tiny CD-ROMs, and after copying the driver across, success.. I got it working, and boy was it worth it!

Suffice to say this IS the network adaptor that is STILL plugged into the Blade server ;-)

 
So what's next?

To finish up this project, I'm going to re-research all that information I lost from my phone, with the possibility of hooking up a PCI-E x1 network adaptor to the mezzanine connector inside the Blade server. I feel that I owe it to this write-up; the Blade server has justified itself in exemplary fashion. Really, I am surprised how good these things are for the money. To balance that comment out, yes, their onboard GPU is poor and of course they don't have any native network ports, which is problematic.

In addition, I've got this idea of rack mounting my Blade server. You'd have thought this would be easy; however, the exhaust now comes out of the top of the Blade, and the Blade server itself is taller than a 1U slot and a little lower than a 2U slot, so it will have to go on a 2U shelf. Maybe I could design this shelf to house two Blade servers? Let me know below in the comments what you think?


More to come (hopefully soon)..