Posts: 40,483
Threads: 491
Joined: Aug 2002
They will have to take some, if they are started, since they are running an OS and it does periodic housekeeping even if it's not running any user stuff. If they are running any anti-virus stuff, that could be scanning and so forth.
If they aren't actually started, then they are just space on the drive, however much you assign each VM.
Dean Roddey
Explorans limites defectum
Posts: 3,716
Threads: 196
Joined: Aug 2006
yeah, you can do that.
your i5 would have 4 physical cores and your i7 would have 8 logical cores.
BUT, with two machines, you'd have to be careful where you load them.
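For reference, a quick way to see the physical/logical split a host will hand to the hypervisor is a couple of lines of Python run on any Linux box or VM; this sketch assumes the third-party psutil package is installed and just reports whatever the actual CPU has:

```python
# Minimal sketch: report physical vs. logical core counts on a host.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)  # real cores only
logical = psutil.cpu_count(logical=True)    # includes hyper-threads

print(f"physical cores: {physical}")
print(f"logical cores:  {logical}")
# A typical i5 of that generation reports 4/4; an i7 reports 4/8.
```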
do the needful ...
Hue | Sonos | Harmony | Elk M1G // Netatmo / Brultech
Posts: 1,592
Threads: 115
Joined: Dec 2006
IVB Wrote:Stuff like that is the reason to keep cpu/RAM and data separate.
and I still am not following you...
perhaps if you could type it slower?
the whole reason unRAID went off down the VM road in the first place is because unRAID itself does not need CPU/memory/etc. resources... so they were just lying around idle... wasting electricity...
my 1st unRAID/non-VMy setup was just a little pooper D525 Atom... (same board my CQC MS was on till the infamous night of the SSD crash...) no HP, no RAM to speak of (2GB technically, it was the least amount available at the time...) and just like CQC, the CPU never did more than idle its way through life...
now, at that same point in time, the more adventurous VM'y type people were loading up unRAID as a VM on their super servers/vmware/whatever... which is probably where the unraid guys got the idea to make unraid a VM host...
I just don't see the downside to having everything in one box?
separate boxes = double the parasitic losses
my Xeon is a bit overkill for the VM's I am running... the only time I got >40% CPU utilization was while having driver issues with my Windows/CQC VM... most times a CPU core may peak at ~40%, but overall I am <10%
the big question you should be asking yourself is what kind of storage do you need?
both RAID and unRAID have their pros and cons...
RAID is faster, especially writes... but drives should match, and running non-enterprise drives can be problematic (stupid WD Green drives :-x )
even the live expansion (when supported) is kind of slow and clunky...
all drives spin up (all the time, unless you are using a consumer-oriented NAS; those let you sleep the drives, but still, it is all or nothing)
unRAID has slow writes... unless you use a cache SSD (selectable on a per-"share" basis), but still, the cache concept is kind of cheesy...
only need to spin up the drive that contains the data you want to read...
expanding the array is as simple as sliding in an HDD of any size up to your parity drive (my case supports hot swap/plugging, that's so cool ;-) )
as there is no striping, if something very bad ever happened, I can always pull any individual drive and read off whatever contents may have survived...
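To make the "single parity drive, no striping" point concrete, here is a minimal sketch of the XOR-parity idea behind that style of protection (the general technique only, not unRAID's actual code): parity is the XOR of every data drive at the same position, so any one missing drive can be rebuilt from the survivors, while each data drive stays an ordinary, individually readable filesystem.

```python
# Conceptual sketch of single-drive XOR parity (the idea behind
# unRAID-style protection), not unRAID's actual implementation.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three "drives" worth of data at the same block offset.
drive1 = b"media files"
drive2 = b"sage record"
drive3 = b"cqc backups"

parity = xor_blocks([drive1, drive2, drive3])

# Lose any one drive: rebuild it from the survivors plus parity.
rebuilt_drive2 = xor_blocks([drive1, drive3, parity])
assert rebuilt_drive2 == drive2
print("drive2 rebuilt:", rebuilt_drive2)
```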
NOTE: As one wise professional something once stated, I am ignorant & childish, with a mindset comparable to 9/11 troofers and wackjob conspiracy theorists. so don't take anything I say as advice...
Posts: 7,970
Threads: 554
Joined: Mar 2005
The reason to separate is that right now, as we speak, my needs are growing as I'm realizing how something works. I'm going from "4-8 cores is enough" to "8-12 cores minimum once I get it working". If I had one box I'd have to get another mobo and take down the array while I move it, losing my family's access to media, and hope like hell I don't foobar the install. With two boxes there is no risk to the media.
Posts: 7,970
Threads: 554
Joined: Mar 2005
jkmonroe Wrote:yeah, you can do that.
your i5 would have 4 physical cores and your i7 would have 8 logical cores.
BUT, with two machines, you'd have to be careful where you load them.
Is this because the free version requires me to specify what runs where?
Also, how portable is the config, assuming it's on a USB stick? An 8-core dual-Xeon R710 with 64GB is only $550 on eBay; should I set up on the i5, then buy that and move the stick, or do the initial setup on the box with more cores?
Posts: 3,716
Threads: 196
Joined: Aug 2006
well, it's all about the architecture. you have 2 physical machines that can each run ESXi, and each connects into the Synology. so you can pick and choose which virtual machines run on which specific hardware.
you don't need to worry about config - machine-specific configuration is contained in a VMX file that will live on the Synology. all that you need to do is install ESXi, attach the datastore, and load the machine. so if you buy something new you would just load ESXi. it takes 15 minutes.
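As a rough illustration of how little is involved, here is a hedged sketch of those three steps driven from the ESXi shell (which bundles a Python interpreter) by shelling out to the stock esxcli / vim-cmd tools; the NAS hostname, share path, datastore name and VM path are made-up placeholders, and in practice you would do the same thing through the vSphere client:

```python
# Rough sketch of bringing a VM up on a freshly installed ESXi host by
# shelling out to the stock esxcli / vim-cmd tools. The NAS hostname,
# share path, datastore name and VM path below are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.check_output(cmd).decode()

# 1. Mount the NFS share exported by the Synology as a datastore.
run(["esxcli", "storage", "nfs", "add",
     "--host", "synology.local",        # placeholder NAS hostname
     "--share", "/volume1/vmstore",     # placeholder export path
     "--volume-name", "synology"])

# 2. Register the VM straight from its .vmx file on that datastore;
#    vim-cmd prints the id of the newly registered VM.
vmid = run(["vim-cmd", "solo/registervm",
            "/vmfs/volumes/synology/cqc-ms/cqc-ms.vmx"]).strip()

# 3. Power it on.
run(["vim-cmd", "vmsvc/power.on", vmid])
```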
there are loads of new CPUs out there, too. once you head down this route you can start trying to build based on usage - maybe get the octa-core Avoton board ($400) to run CQC/PlayON/BI and then the Xeon D-1540 quad board ($300) to run PS/Lightroom/Plex.
do the needful ...
Hue | Sonos | Harmony | Elk M1G // Netatmo / Brultech
Posts: 40,483
Threads: 491
Joined: Aug 2002
Don't forget I/O in all of this. The big purpose of the CQC MS, for end users as opposed to admins doing config, is serving up data. No matter how many CPUs you have, there's still only one I/O bus, and I don't think any of these VMs allow you to reserve I/O bandwidth, do they? So, if the machine is being used to stream or process or rip high-res media, and there are multiple VMs doing it at once, it's possible that could slow down the response of the MS to clients, even if it has its own dedicated CPUs.
And admittedly the I/O architecture these days is pretty crazy fast, so I don't know where that 'might be a problem' point shows up. But it's something to consider.
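If you want to find out where that point is on your own hardware, one crude approach is to read several large files at once and watch the aggregate throughput; a throwaway sketch along those lines (file paths are whatever you pass on the command line, nothing product-specific here):

```python
# Crude concurrent-read throughput test: read several large files at
# once and report the aggregate rate. Use files bigger than RAM (or
# drop caches between runs) so the OS page cache doesn't flatter the
# numbers; compare runs with 1, 2, 3... files.
import sys
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MB reads

def read_file(path):
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return total
            total += len(chunk)

if __name__ == "__main__":
    paths = sys.argv[1:]
    if not paths:
        sys.exit("usage: iostress.py FILE [FILE ...]")
    start = time.time()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        total_bytes = sum(pool.map(read_file, paths))
    elapsed = time.time() - start
    print(f"{len(paths)} streams: {total_bytes / elapsed / 1e6:.0f} MB/s aggregate")
```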
Dean Roddey
Explorans limites defectum
Posts: 7,970
Threads: 554
Joined: Mar 2005
Good point. If SWL's mega-server doesn't have an issue, I doubt mine will. The host for the CQC MS will only have PlayOn (recording Netflix/Hulu), BI (cameras), and SQL Server (temp logging) on it. That should be trivial.
Posts: 1,592
Threads: 115
Joined: Dec 2006
IVB Wrote:The reason to separate is that right now, as we speak, my needs are growing as I'm realizing how something works. I'm going from "4-8 cores is enough" to "8-12 cores minimum once I get it working". If I had one box I'd have to get another mobo and take down the array while I move it, losing my family's access to media, and hope like hell I don't foobar the install. With two boxes there is no risk to the media.
I am still not following, type even slower...
so you take down your plex/sage/etc VM box, what good are the files on the storage without something to sort/play them? either way, you are down and people will yell at you...
as for moving, A) why move? at least right away, if you determine that you need more horsepower, more VM's, temporarily spin up another server... add new VM's to the new server, sort everything out, and then if new server has enough HP start moving VM's from old server to new server in a slow and controlled process making sure each VM works and is happy before moving on to the next VM... doesn't matter if one of the VM's happens to be your storage
easy peasy...
that is one of the true joys of VM's...
or if you determine that the old server actually does have enough HP for the new VM's, then move them there and turn off the new server...
either way, still easy peasy...
or just start with multiple servers that you have laying around, and then once everything works, calculate HP and move them all onto the one server that has enough HP...
VM's are cool...
and as far as moving storage goes, it has been a long time since I dealt with real RAID, but I vaguely remember HW RAID being a scary PITA... one wrong move and everything goes up in a poof of virtual cloudiness... so don't do that...
but something like unRAID where it is just a glorified JBOD is so much simpler...
on the unRAID boot flash there is a config file... in that config file it lists all its drives by serial number and what logical "slot" they are installed in... when it boots, it looks for all your drives, and when it finds them it cross-references their serial numbers to where it thinks they should go... and done... no input required by you, the person...
if somehow that config file goes away (never happened to me, but hey, these things could happen? maybe?) so what? you start from scratch, but so what? all your data is still there, you will need to rerun parity, but big whoop-de-do...
my SAS2 backplane seems to like to reassign physical addresses every time it reboots, unraid just doesn't care... try that with real HW raid :-) go rearrange your drives... see how that works out for you...
I moved my parity and sage recording drives from the SAS backplane to the motherboard to try and get a little better bandwidth... 0 config needed for unraid... it just found their new physical addresses on a completely different bus/interface and went on with its day...
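For anyone who wants to picture the "track drives by serial number, ignore where the controller put them" trick, here is a small conceptual sketch of the same approach on a stock Linux box; it is only the general idea, not unRAID's config format or code, and the serial numbers are invented:

```python
# Conceptual sketch: find drives by serial number instead of by
# /dev/sdX name, which can change whenever a controller or backplane
# reshuffles ports. This mirrors the idea only; it is not unRAID's
# actual config format or code. The serial numbers are made up.
import os

SLOTS = {                      # hypothetical slot assignment by serial
    "WD-WCC4N1234567": "parity",
    "WD-WCC4N7654321": "disk1",
    "ZDH123AB": "disk2",
}

BY_ID = "/dev/disk/by-id"
found = {}
for name in os.listdir(BY_ID):
    # Whole-disk entries look like "ata-<model>_<serial>"; partition
    # entries carry a "-partN" suffix and so won't match endswith().
    for serial, slot in SLOTS.items():
        if name.endswith(serial):
            found[slot] = os.path.realpath(os.path.join(BY_ID, name))

for serial, slot in SLOTS.items():
    print(f"{slot:7s} {serial:18s} -> {found.get(slot, 'MISSING')}")
```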
NOTE: As one wise professional something once stated, I am ignorant & childish, with a mindset comparable to 9/11 troofers and wackjob conspiracy theorists. so don't take anything I say as advice...
Posts: 7,970
Threads: 554
Joined: Mar 2005
SomeWhatLost Wrote:I am still not following, type even slower...
so you take down your plex/sage/etc VM box, what good are the files on the storage without something to sort/play them? either way, you are down and people will yell at you...
From what I'm learning, I can pull up the VMs very quickly on another host. As in, within minutes.
Quote:as for moving, A) why move? at least right away, if you determine that you need more horsepower, more VM's, temporarily spin up another server... add new VM's to the new server, sort everything out, and then if new server has enough HP start moving VM's from old server to new server in a slow and controlled process making sure each VM works and is happy before moving on to the next VM... doesn't matter if one of the VM's happens to be your storage
easy peasy...
that is one of the true joys of VM's...
or if you determine that the old server actually does have enough HP for the new VM's, then move them there and turn off the new server...
either way, still easy peasy...
or just start with multiple servers that you have laying around, and then once everything works, calculate HP and move them all onto the one server that has enough HP...
VM's are cool...
Let me repeat back what I think you're telling me:
- Build a megaserver for both NAS & cpu/RAM
- if I need more power, add another cpu/ram only server.
Thing is, in that world I still need the older megaserver. Power costs are a big deal for me; I'd rather retire the old box. An extra 120W server costs $35/month just in electricity.
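For what it's worth, the arithmetic behind that $35 is easy to sanity-check; the electricity rate below is my assumption (roughly Bay Area pricing), not a number from the thread:

```python
# Quick sanity check on the "extra 120 W server" figure. The $/kWh
# rate is an assumption (roughly Bay Area tiered pricing), not a
# number from the thread; plug in your own.
watts = 120
rate_per_kwh = 0.40
hours_per_month = 24 * 30

kwh_per_month = watts / 1000 * hours_per_month   # ~86 kWh
cost_per_month = kwh_per_month * rate_per_kwh    # ~$35
print(f"{kwh_per_month:.0f} kWh/month -> ${cost_per_month:.0f}/month")
```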
Quote:and as far as moving storage goes, it has been a long time since I dealt with real RAID, but I vaguely remember HW RAID being a scary PITA... one wrong move and everything goes up in a poof of virtual cloudiness... so don't do that...
but something like unRAID where it is just a glorified JBOD is so much simpler...
on the unRAID boot flash there is a config file... in that config file it lists all its drives by serial number and what logical "slot" they are installed in... when it boots, it looks for all your drives, and when it finds them it cross-references their serial numbers to where it thinks they should go... and done... no input required by you, the person...
if somehow that config file goes away (never happened to me, but hey, these things could happen? maybe?) so what? you start from scratch, but so what? all your data is still there, you will need to rerun parity, but big whoop-de-do...
my SAS2 backplane seems to like to reassign physical addresses every time it reboots, unraid just doesn't care... try that with real HW raid :-) go rearrange your drives... see how that works out for you...
I moved my parity and sage recording drives from the SAS backplane to the motherboard to try and get a little better bandwidth... 0 config needed for unraid... it just found their new physical addresses on a completely different bus/interface and went on with its day...
That's the reason to use Synology SHR, and NOT RAID. From what I'm reading, it's the same: plop the drives in and go.