Posts: 7,970
Threads: 554
Joined: Mar 2005
09-17-2016, 08:38 AM
(This post was last modified: 09-17-2016, 08:43 AM by IVB.)
JKMonroe has been teaching me about virtualization over text, but I'll switch to the forums as (a) I have too many questions for text, and (b) others may benefit.
My problem:
1) Too many apps are fighting with each other for CPU, machines get unresponsive, CQC gets flaky. I don't want to add more machines.
2) I think the boot disk on the mega-array server is dying: I've had one BSOD, and random errors requiring a reboot are increasing.
3) RocketRAID card is LONG past its warranty date
The hardware I have:
Server 1:
Specs: i7-4770/32GB (soon; the first i7 was DOA, so as of this minute it's an i3). LGA1150 mobo, 2.5 years old.
Server 2:
Specs: i5-3570/16GB (quad-core but no hyperthreading, 4 threads). LGA1155 mobo, 4 years old
RocketRAID card with 5x4TB in RAID5. Card is 7 years old; drives were all replaced 2.8 years ago.
Major Apps
Ones that can run on any machine
CQC: low RAM, low CPU
PlayOn: high CPU, low RAM. Not used often, but when it is, it takes 50% of a hyperthreaded dual-core i3
SQL*Server
BlueIris: typically 50-70% of an i3
Ones that I want on an easy-to-access machine
Adobe Photoshop & Lightroom: low CPU but 5-7GB of RAM
Plex: high CPU, medium RAM. Transcoding one BluRay takes 98% of the i5-3570. Moving to all BluRay, I need to be able to stream at least 2 at once.
Future architecture (guessing)
First floor server: Use the i7/32GB, Win8, directly run Adobe & Plex
Basement (accessible via crawlspace): VM server (maybe). Get at least a quad-core, hyperthreaded CPU, since:
- PlayOn & BlueIris each use 50-70% of the i3/hyper so give them 3 cores
- CQC & SQL*Server could get 2 cores
Steps (I think; hence this post)
1) Put in the i7 whenever Newegg ships me one that works
2) Does a VM server running PlayOn/BI/CQC/SS need more than 16GB? Looks like no 32GB sets supported by that mobo are sold anymore. If 16GB is enough, get the i7-3770K; that's $450 though. If it won't work, I'll look into dual-Xeon machines so I have headroom. That's over $1K in parts, it seems. (Rough RAM math sketched after this list.)
3) Get a Synology with 3x6TB & copy all the data off the array.
4) Migrate Plex from basement i5 to main floor i7 (once installed).
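Rough RAM math for step 2, as a quick sketch; every per-VM number below is purely my guess:

Code:
# Rough RAM budget for a 16GB ESXi host -- all per-guest numbers are guesses.
guests_gb = {
    "CQC": 2,          # "low ram" per my app notes above
    "SQL Server": 4,
    "BlueIris": 4,
    "PlayOn": 2,       # "low ram" but high CPU
}
hypervisor_gb = 2      # ESXi itself plus per-VM overhead
total = sum(guests_gb.values()) + hypervisor_gb
print("Budget: %d GB of 16 GB" % total)   # 14 of 16 -- tight but maybe workable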
Questions
1) See above about 16GB for a VM server. Do I put $450 into a 4-year-old mobo, or is it worth spending twice as much on a new box? (No desire to buy new unless I absolutely have to. If I get a new one, I might as well get a dual Xeon so there's headroom, but man, that sounds like a lot of work.)
2) On a VM server, JKMonroe said I should install VMware on the boot disk. But normally mobos come with Windows driver disks for LAN, graphics, etc. Is that step just not done when using VMware?
3) How do I load up a VM with Windows on it? When/where do I create that VM?
I have time to putz now, but my final pitch to a new client is Wednesday, and I may only have 2 weeks left on my sabbatical. I'd like to get this and many other house projects done before then.
Posts: 3,716
Threads: 196
Joined: Aug 2006
Synology - 3x6TB, SHR for 12TB.
Server 1 - i7/32GB
Server 2 - i5/16GB
Switch - Managed EdgeSwitch
Create a LAG group on 3 LAN ports on the Synology (with a static IP) and configure a matching LAG group on the 3 corresponding EdgeSwitch ports. Run the 4th LAN port 1:1 (also with a static IP).
Decide how many total machines you would like to have -
(1) CQC, PlayON, Blue Iris
(2) SQL Server
(3) Plex
(4) PhotoShop/Lightroom
Configure an 800GB iSCSI LUN on the Synology. Configure MPIO for the LUN on the Synology.
Install and configure VMware ESXi on each machine.
Attach the iSCSI LUN to each server via vSphere as a datastore.
Run the Plex and Photoshop/Lightroom guests on the i7, and the CQC and SQL guests on the i5.
Configure the remaining 11.2TB however you'd like, but most likely as a single-volume file share.
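If you want to see where those capacity numbers come from, here's a quick back-of-the-envelope; with equal-size disks, SHR keeps one disk's worth of parity, same as RAID5:

Code:
# Back-of-the-envelope for the storage split above.
# SHR with equal-size disks keeps one disk of parity, like RAID5.
def shr_usable_tb(num_disks, disk_tb):
    return (num_disks - 1) * disk_tb

pool = shr_usable_tb(3, 6)   # 3x6TB -> 12 TB usable
lun = 0.8                    # 800GB iSCSI LUN carved out as the VM datastore
share = pool - lun           # what's left over for the file share
print("%d TB pool = %.1f TB LUN + %.1f TB share" % (pool, lun, share))
# 12 TB pool = 0.8 TB LUN + 11.2 TB share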
So how do you get your existing machines into a virtual setup? P2V, baby! You can run the P2V tool against your existing machines and just turn them into virtual copies of themselves. Easy!
do the needful ...
Hue | Sonos | Harmony | Elk M1G // Netatmo / Brultech
Posts: 7,970
Threads: 554
Joined: Mar 2005
I want to use the correct # of machines to get my system back to stability, even if I have to buy more OS licenses. These new apps have disrupted things too much.
Questions:
1) PlayOn and BlueIris (for me) both consume a ton of CPU; is it smart to put them in the same VM? Also, should I put CQC & SQL Server together since they're both home-automation related, so I have just one "HomeAutomation" VM?
2) For 3 (Plex) and 4 (Adobe), is there really a need to put them into VMs? Adobe barely touches CPU, and Plex barely touches RAM; I was considering not virtualizing the i7 box.
3) The i5 is non-hyperthreaded, so only 4 threads; it'll fall down if I try to run BI & PlayOn on it. When I've tried running them on my i3/hyper (hence also 4 threads), the whole machine hangs. Should I spend $450 to upgrade a 4-year-old mobo to an i7/hyper (8 threads), knowing I can never get more than 16GB of RAM (no supported memory from the mobo's list is sold anymore)? Or spend 2x that to get a dual-Xeon setup?
4) I'm still not mentally comprehending my Q1 & Q2 above about how I load a VM onto a server. Does "iSCSI LUN via vSphere" mean the VMware ESXi will be able to see the Synology and load VMs that I've previously created? (Also, I don't need to p2v the existing machines as I'm totally changing the config, unless you're saying to do that because it's simpler than creating & formatting a new VM from scratch.)
Posts: 1,592
Threads: 115
Joined: Dec 2006
A) As a recent convert, VMs are great. I went from 3 24/7 servers running lots o' stuff plus a bunch of NASes down to 1 unRAID box running everything (and one just for backup).
B) Have you looked at unRAID recently? As of v6 they are all about being VM'y... and dockery... dockers are great... Plex & Sage are dockers, and I can spin up a new instance of either with just a click... they even make setting up an actual VM very easy... wonder if they have a docker for BI yet? Oh well, anyway, just one more option to consider...
NOTE: As one wise professional something once stated, I am ignorant & childish, with a mindset comparable to 9/11 troofers and wackjob conspiracy theorists. so don't take anything I say as advice...
Posts: 7,970
Threads: 554
Joined: Mar 2005
I have looked at unRAID, a lot actually. And you're right, at the same price point I could have a WAY nicer box. But Synology has done all the legwork researching components that work together, link aggregation, cooling, and hotswap. And from what I read, the power requirements are minimal. If the Synology box dies, I can plop the SHR array into another Synology.
I have a 12-bay case & power supply, but I'd need to identify which mobo and CPU, then configure it all to make it go. Then I'd have to actually do that. I'm guessing the parts alone would be a few hundred. For $850, Synology sells an 8-bay hotswap unit. This is their business and they're very respected, so I'm sure they've designed a solid box.
Posts: 1,592
Threads: 115
Joined: Dec 2006
Or for ~$300-$600 you could get a refurbed Supermicro...
2U Supermicro 12 Bay FreeNAS SAS2 Server X8DTN+ 2x Xeon 6 CORE JBOD LSI 9211-8i - $500
http://www.ebay.com/itm/2U-Supermicro-12-Bay-FreeNAS-SAS2-Server-X8DTN-2x-Xeon-6-CORE-JBOD-LSI-9211-8i-/152236083959?hash=item2371fa4af7:g:~SQAAOSw5ClXxyjM
FreeNas UNRAID JBOD HBA 2U 8 Bay SAS2 Server 2x Xeon Quad Core 2.4Ghz 2x PS Rail - $350, but only a SAS1 backplane... need to change that...
Personally, I went with a Supermicro 846E16-R1200B chassis and supplied my own parts... 24 drive bays, SAS2... there is a SAS3 backplane available, and once the $$$ reaches a reasonable level I may upgrade, but for now SAS2 is just fine...
Supermicro servers have a bit of a reputation too, ya know... and it's not all bad...
NOTE: As one wise professional something once stated, I am ignorant & childish, with a mindset comparable to 9/11 troofers and wackjob conspiracy theorists. so don't take anything I say as advice...
Posts: 7,970
Threads: 554
Joined: Mar 2005
OK, thinking out loud here.
That has 2x1200W power supplies (and caps out at 4TB drives). What's your power consumption like? I'm at $0.399/kWh; that thing sounds like the power bill would spike compared to the Synology's 25W at rest / 45W during access.
That's $500, but it's used. Plus $130 for unRAID. I'd need 5x4TB at $250 each, so $1250.
To effectively compare, it's:
$450 - i7 (8 threads, vs. 12 cores in that box)
$850 - DS1815+ Synology
$1200 - 4x6TB at $300 each (technically a little more space, 18TB vs 16TB)
$2500 vs $1880.
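Quick sanity check on those totals, plus the electricity side at my rate; the Supermicro's idle wattage is purely my guess:

Code:
# Cost comparison sketch -- the 150W Supermicro idle draw is a guess;
# the Synology wattages are from its spec sheet.
RATE = 0.399           # $/kWh
HOURS = 24 * 365

synology = {"i7-3770K": 450, "DS1815+": 850, "4x6TB": 1200}
diy      = {"used Supermicro": 500, "unRAID license": 130, "5x4TB": 1250}

def annual_power(watts):
    return watts / 1000.0 * HOURS * RATE

print("Synology route: $%d, DIY route: $%d"
      % (sum(synology.values()), sum(diy.values())))   # $2500 vs $1880
print("Synology at rest: $%.0f/yr" % annual_power(25))     # ~$87/yr
print("Supermicro at 150W: $%.0f/yr" % annual_power(150))  # ~$524/yr

If that 150W guess is anywhere near right, the ~$620 upfront savings gets eaten by the power bill in under two years.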
DIY pros:
- $600 cheaper
- 4 more cores
Synology pros:
- Likely lower power bill in future as I can use big drives
- New Synology = longer NAS lifespan
- Prebuilt; I'll save (5? 10?) hours building and configuring it.
- Abstract NAS from VM Server = easier to upgrade just horsepower in future
Did I miss any pros of either?
Posts: 4,225
Threads: 365
Joined: May 2005
VMs are the way to go.
You can install ESXi on a USB stick and boot from that, keeping your primary drives for the datastore.
Some recommendations from control system manufacturers when using VMs:
Separate HDD for each guest. This may be overkill for some guests like CQC, but if you have a guest that uses the HDD a lot (Blue Iris), then a separate disk in the datastore for it may be warranted.
Drive redundancy. If running separate disks, think about mirroring.
If you run multiple guests on the same datastore drives, then think about RAID5, or even better RAID6.
unRAID is awesome - I'm still on the v5 beta in my 24-bay ESXi system. There are a lot of good threads on the Simple Machines forum on how to virtualise unRAID, how to use pass-through, etc.
As for your flaky HDD - I highly recommend SpinRite from grc.com. I have used it many times with success, and this is the exact problem it was built to solve.
I reckon the dual Xeon would be a better bet for the hypervisor - especially for a CPU-intensive app like Blue Iris.
Don't forget to enable virtualization support in the BIOS.
There are so many benefits that come along with virtualization.
Rollback is dead easy with snapshots (see the sketch after this list).
Cloning is a great feature
Portability. If one of your ESXi servers craps itself, move the guest to the other one and boot it.
Testing new apps in a new clean OS and not affecting anything else.
If you want to go high spec, look into high availability, so that both your servers have the same guest but only one runs; if it stops, the other takes over in minutes.
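For example, a pre-upgrade snapshot can even be scripted. A minimal sketch using VMware's pyVmomi Python SDK; the host, credentials and guest name here are made up:

Code:
# Minimal pre-upgrade snapshot via pyVmomi (pip install pyvmomi).
# esxi01.local, root/secret and "cqc-guest" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()   # lab host with a self-signed cert
si = SmartConnect(host="esxi01.local", user="root",
                  pwd="secret", sslContext=ctx)

# Find the guest by DNS name (needs VMware Tools running in the guest).
vm = si.content.searchIndex.FindByDnsName(
    datacenter=None, dnsName="cqc-guest", vmSearch=True)
vm.CreateSnapshot_Task(
    name="pre-upgrade",
    description="rollback point before the new build",
    memory=False,   # skip the RAM image; faster, crash-consistent
    quiesce=True)   # ask VMware Tools to flush the guest filesystem

Disconnect(si)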
We are about to move our multi-server control system from ABB onto a virtualised platform (ESXi), which is going to reduce the [physical] server count from 8 down to 2.
Mykel Koblenz
Illawarra Smart Home
Posts: 7,970
Threads: 554
Joined: Mar 2005
znelbok Wrote:You can install ESXi on a USB stick and boot from that, keeping your primary drives for the datastore. Oh interesting. I think I read that life expectancy is also in the 5+ year timeframe when it's used that way, so I could just buy a new 4GB stick, right? I think ESXi is pretty small.
Quote:Some recommendations from control system manufacturers when using VMs:
Separate HDD for each guest. This may be overkill for some guests like CQC, but if you have a guest that uses the HDD a lot (Blue Iris), then a separate disk in the datastore for it may be warranted.
Actually that brings up a new question: I think you're saying I could:
- Use the Synology (or SuperMicro) for the big media stuff, point plex/whatever at that
- Use a drive in the same physical machine as the host server for BlueIris (not backed up since I don't care about that)
- ESXi server setup is where I'd say "this is where the various data pools reside".
Do I assign drive letters or network shares inside the ESXi server and then the VMs see that drive letter?
Posts: 3,716
Threads: 196
Joined: Aug 2006
IVB Wrote:Oh interesting. I think I read that life expectancy is also in the 5+ year timeframe when it's used that way, so I could just buy a new 4GB stick, right? I think ESXi is pretty small.
Actually that brings up a new question: I think you're saying I could:
- Use the Synology (or SuperMicro) for the big media stuff, point plex/whatever at that
- Use a drive in the same physical machine as the host server for BlueIris (not backed up since I don't care about that)
- ESXi server setup is where I'd say "this is where the various data pools reside".
Do I assign drive letters or network shares inside the ESXi server and then the VMs see that drive letter?
With virtualization you need to move away from the concepts of 'disks' and 'drive letters'.
With the Synology, you will have a storage pool which you can divvy up any way you see fit. With VMware, as long as you have an iSCSI LUN configured on your Synology, you can attach it as a datastore. VMware doesn't know or care whether a drive is physically inside its box or on the network somewhere; it's simply a datastore (see the sketch below).
Synology = storage pool
Hosts = ESXi machines which access the storage pool
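If you want to prove it to yourself once a host is up, here's a sketch with the pyVmomi Python SDK that lists every datastore a host can see; a local disk and an iSCSI LUN show up exactly the same way (host and credentials are placeholders):

Code:
# List every datastore ESXi sees -- local VMFS, iSCSI LUN, NFS share, whatever.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.local", user="root",
                  pwd="secret", sslContext=ctx)

# Connected straight to a host, the root folder holds one datacenter.
for dc in si.content.rootFolder.childEntity:
    for ds in dc.datastore:                  # every datastore, any backing
        s = ds.summary
        print("%-20s %-5s %8.1f GB free of %8.1f GB"
              % (s.name, s.type, s.freeSpace / 2.0**30, s.capacity / 2.0**30))

Disconnect(si)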
Dell ships some of their higher-end models with mirrored CompactFlash cards to run VMware. It's amazingly lightweight.
do the needful ...
Hue | Sonos | Harmony | Elk M1G // Netatmo / Brultech