Oh I don't know



Bob Crandell
25-Jan-2012, 18:19
What do you want to talk about?

malcolmlewis
25-Jan-2012, 18:53
On Wed, 25 Jan 2012 17:19:51 GMT
Bob Crandell <bob@donttreadon.me> wrote:

> What do you want to talk about?
http://www.stuff.co.nz/travel/international/6307611/Fear-of-flying-Don-t-watch-this-video

--
Cheers Malcolm °¿° (Linux Counter #276890)
openSUSE 12.1 (x86_64) Kernel 3.1.0-1.2-desktop
up 2 days 17:43, 5 users, load average: 0.29, 0.14, 0.14
CPU Intel i5 CPU M520@2.40GHz | Intel Arrandale GPU

kgroneman
25-Jan-2012, 21:07
The old days

--
Kim - 1/25/2012 1:07:48 PM

Ben A L Jemmett
25-Jan-2012, 21:10
kgroneman wrote:
> The old days

Can it be that it was all so simple then?

--
Regards,
Ben A L Jemmett.
http://flatpack.microwavepizza.co.uk/

Mary Wood
25-Jan-2012, 23:13
Knees.

On 1/25/2012 11:19 AM, Bob Crandell wrote:
> What do you want to talk about?

GofBorg
25-Jan-2012, 23:20
Virtualization...

What is a good Dell/HP server for virtualizing a
small Oracle database and an application server?
It currently runs on an old AIX server with ~512MB memory and
dual 800MHz cores. The database is small enough to fit on a USB
pen drive, so storage only needs to be big enough to hold the virtual
machines and the wee bit of data.

I don't care for fancy storage options; local storage is fine, but
I would like it to run ESX and have vMotion between two identical boxes.

Pricing needs to come in under $40k all in.

Lance Haig
25-Jan-2012, 23:31
On 25/01/12 22:20, GofBorg wrote:
> Virtualization...
>
> What is a good Dell/HP server for virtualizing a
> small oracle database and an application server?
> Currently is running on an old AIX server ~512MB memory,
> dual 800Mhz cores. Database is small enough to fit on a USB
> pen drive so storage needs to be only big enought to hold the virtual
> machines and the wee bit of data.
>
> Don't care for fancy storage options, local storage is fine but
> I would like it to run ESX and have Vmwotion between two identical boxes.
>
> Pricing needs to come in under $40k all in.
>
>


How about this:

3 x DL360G5, 2 disks each.
3 NICs each:
1 for storage
1 for access
1 for mgmt

Set up one of the 360's as an OSS SAN (something like opennas etc.) and create
an NFS share for the VMware servers.
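If you go the NFS route, the export itself is simple. Here's a rough sketch (the paths and subnet are made up, adjust for your network) — note that ESX mounts NFS datastores as root, so the export needs no_root_squash:

```shell
# Hypothetical export line for an ESX NFS datastore. On the real storage box
# this belongs in /etc/exports, followed by 'exportfs -ra'; it's written to
# /tmp here purely for illustration.
mkdir -p /tmp/vmstore
echo '/tmp/vmstore 192.168.10.0/24(rw,sync,no_root_squash)' > /tmp/exports.example
cat /tmp/exports.example
```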

Install vCenter on a VM in ESX.
P2V your other VM. Done.



vCenter license (this is expensive).

Depending on your budget, I would try to get three VM hosts so you don't get
split-brain.

Hope this helps

Lance

GofBorg
26-Jan-2012, 00:32
Sort of what I was thinking, except I'm not sure
why I would need or want a SAN other than being
able to mount the same set of disks under all three
servers. I'm rather fearful of the all-eggs-in-one-basket
scenario with a SAN, but then again all the eggs are in a RAID
array as well, and I've had more than one RAID array go
bye-bye over the years.

I was thinking of a two-box setup with arrays in each machine, with a nightly
rsync of the main VM to the secondary box, which would act as a repository
of the production system. The second box would be available for
vMotion as well.

Am I wrong in my thinking here? I've done a bit of virtualization but nothing
sophisticated. I like the idea of having a live spare or two, as it were.

Bob Crandell
26-Jan-2012, 01:27
On Wed, 25 Jan 2012 22:13:37 +0000, Mary Wood wrote:

> Knees.
>
> On 1/25/2012 11:19 AM, Bob Crandell wrote:
>> What do you want to talk about?

Thine knees or bees knees?

Bob Crandell
26-Jan-2012, 01:30
On Wed, 25 Jan 2012 17:53:03 +0000, malcolmlewis wrote:

> On Wed, 25 Jan 2012 17:19:51 GMT
> Bob Crandell <bob@donttreadon.me> wrote:
>
>> What do you want to talk about?
> http://www.stuff.co.nz/travel/international/6307611/Fear-of-flying-Don-t-watch-this-video

Looks like fun.

Douglas Black
26-Jan-2012, 02:24
Mary Wood,

> Knees.

I see what you did there.

unsigned
26-Jan-2012, 15:23
vMotion can only be used with shared storage; it moves the running
processes and active memory, not the VM on disk. There are several
options for turning local storage into shared, though I don't have much
experience with anything other than LeftHand VSA.

Now, thinking out loud, I don't think rsync would be an option... unless
maybe the 'SAN' is NFS. Still, I'm not sure reading the large VM files
would be healthy without the VMware system being aware. You could go
third-party with Veeam or another backup plug-in; that would work.

There are several multi-node licensing bundles for VMware that are very
affordable, and they now have a storage component as well, just for
these types of environments.


On 1/25/2012 5:32 PM, GofBorg wrote:
> Sort of what I was thinking except not sure
> why I would need or want a SAN other than being
> able to mount the same set of disks under all three
> servers. I'm rather fearful of the all eggs in a basket
> scenario with SAN but then again all eggs are in a RAID
> array as well and I've had more than one RAID array go
> bye bye over the years.
>
> I was thinking of a two box setup with arrays in each machine with a nightly
> rsync of the main VM to the secondary box which would act as a repository
> of the production system. The second box would be available for
> v-motion as well.
>
> Am I wrong thinking here? I've done a bit of virtualization but nothing
> sophisticated. I like the idea of having a live spare or two as it were.
>

GofBorg
26-Jan-2012, 18:21
Currently, on another setup, I am using a system where the VM gets suspended
in the middle of the night and the VM directories get rsync'ed to
another server. That is using KVM on Linux; I'm not sure it would even be
possible with ESX. It works well, but I was thinking it might be nice to
have a setup where, if a server failed, it would switch to another running
system. However, with the information I am getting now, it sounds like
it will only protect against the loss of a server 'box' (motherboard,
power supply, memory, NIC) where the server goes away. It doesn't address
anything at the disk-array level, which is where nearly 100% of the failures
I've experienced occur.
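For what it's worth, the suspend-and-rsync routine described above boils down to something like this under libvirt/KVM. The VM name, source path, and destination host are all invented; run it from cron:

```shell
#!/bin/sh
# Sketch of the nightly suspend-and-rsync routine described above.
# VM name, source path, and destination host are assumptions.
nightly_vm_copy() {
    vm="$1"; src="$2"; dest="$3"
    virsh suspend "$vm"                  # pause the guest so its disk image is quiescent
    rsync -a --delete "$src/" "$dest/"   # mirror the image directory to the standby box
    virsh resume "$vm"                   # wake the guest back up
}

# e.g. in a nightly cron job:
# nightly_vm_copy oracledb /var/lib/libvirt/images backuphost:/srv/vm-replica
```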

I'll have to revisit my strategy. Maybe clustering with HA is more what I
need.

unsigned
26-Jan-2012, 19:02
Yep, suspending a VM before the copy would work... and there are ways to do
that in the ESXi world. It's not Linux, though, and it's not an open
architecture.

HA basically restarts a VM on another host from the shared storage if
the current host goes down or becomes otherwise unresponsive. FT (big
licensing) runs two VMs in 'lockstep' on two hosts at the same time,
from shared storage. Neither of these protects against storage-level
failures.

I know you know this, but I'm saying it anyway. Similar to the OSI
model, the hypervisor sits on top of the hardware layer. It doesn't care
if the disk is FC, iSCSI, SAS, SATA, etc. It just needs disk.

We see individual drive and power supply failures as our #1 and #2
maintenance items. Both are easily overcome by a level of redundancy.
Then again, I also refuse to white-box storage arrays (and pretty much
anything but my own personal systems). I've only had to deal with two
array failures in 15 years over countless systems, a record that I think
is good. One was due to bad firmware and the other was just a stray
cosmic ray that struck during a hot drive change.


On 1/26/2012 11:21 AM, GofBorg wrote:
> Currently on another setup I am using a system where the VM gets suspended
> in the middle of the night and the vmware directories get rsync'ed to
> another server. That is using KVM on Linux, not sure it would even be
> possible with ESX. It works well, but I was thinking it might be nice to
> have a setup where if a server failed it would switch to another running
> system. However, with the information that I am getting now it sounds like
> it will only protect against the loss of a server 'box' meaning motherboard,
> power supply, memory issues, NIC where the server goes away. Doesn't address
> anything at the disk array level which is where nearly 100% of the failures
> I've experienced occur.
>
> I'll have to revisit my strategy. Maybe clustering with HA is more what I
> need.

KBOYLE
26-Jan-2012, 19:03
GofBorg wrote:

> Virtualization...
>
> What is a good Dell/HP server for virtualizing
>
> Pricing needs to come in under $40k all in.

Have you looked at this? It's not Dell/HP but it sure has some nice
features.
http://www.intelmodularserver.com/

--
Kevin Boyle
Calgary, Alberta Canada

GofBorg
26-Jan-2012, 20:20
> Yep, suspending a VM before copy would work.. and there are ways to do
> that in the ESXi world. Its not linux though and its not an open
> architecture.
That is something that keeps me out of the VMware camp actually. I love
the product otherwise. I still use Workstation on my PC.

> HA basically restarts a VM on another host from the shared storage if
> the current host goes down or becomes otherwise unresponsive. FT (big
> licensing) runs 2 VM's in 'lockstep' on two hosts at the same time -
> from shared storage.
I will propose two different configurations and let the bean counters decide.

> We see individual drive and power supply failures as our #1 and #2
> maintenance items.
That would be the same for me.

> Both are easily overcome by a level of redundancy.
The problem I've ended up with is multi-drive failures in an array.
It seems like once the array gets X years on it, the drives go on strike.
My record is about the same as yours: two arrays in 15 years, but even one
is painful enough. The system I am working on replacing is now 15 years old;
obviously a lot has changed since then. I'm just gathering information
to propose a new system that will have the same level of reliability, with
added flexibility and at lower cost. Having worked with virtualization for
a number of years now, it is my opinion that being able to port virtual
machines around across hardware would provide a significant additional
capability.

Bob Crandell
26-Jan-2012, 20:27
A headline in Programmers Paradise:
The NEW VMware Workstation 8
Run a full web or cloud environment right on your PC!

Does that make it vaporware?

GofBorg
26-Jan-2012, 20:44
> A headline in Programmers Paradise:
> The NEW VMware Workstation 8
> Run a full web or cloud environment right on your PC!
>
> Does that make it vaporware?

Well, they do allow 'instance sharing' now as well.
You could build a cloud on your PC, all the elements are there,
and if you stuck it in a co-lo somewhere...well there ya go.

Here is what I think happened: VMware slowly allowed VMware Server/GSX
to all but die, but realized that they needed something to compete in
the small-server space with the growing popularity of MS, KVM, etc., so they
did a bit of tweaking to the licensing and interface of Workstation and
voilà. The problem is, I don't trust them anymore. I can see getting an
environment set up and based around Workstation only to have them pull the
rug out on that idea. I have a hard time with companies whose policies
change like the wind. That's one reason we never heavily invested in Novell
products outside of NetWare; you never knew if they were really committed to
the product.

unsigned
26-Jan-2012, 21:56
Most of the Workstation 8 improvements are spun really hard by
marketing. They basically made it so Workstation 8 and ESXi work
together better, but not seamlessly.

Workstation 8 can act as a partially-featured administration client for
VMs on an ESXi cluster (but not hosts) and can easily send VMs to that
cluster. Going from cluster to a local Workstation 8 install still isn't
any different (shutdown and copy).

One kind of wonders why it took five versions of ESX/i to get that
functionality... but yeah, it's here.

Your comment on Novell's product commitment is ironic considering Novell
walked away from NetWare and not its other core products like GroupWise,
IDM and Zen. You win some, you lose some I guess. :D


On 1/26/2012 1:44 PM, GofBorg wrote:
>> A headline in Programmers Paradise:
>> The NEW VMware Workstation 8
>> Run a full web or cloud environment right on your PC!
>>
>> Does that make it vaporware?
>
> Well, they do allow 'instance sharing' now as well.
> You could build a cloud on your PC, all the elements are there,
> and if you stuck it in a co-lo somewhere...well there ya go.
>
> Here is what I think happened. VMware slowly allowed VMware server/GSX
> to all but die but realized that they needed something to compete in
> the small server space with the growing popularity of MS, KVM etc so they
> did a bit of tweaking to the licensing and interface of Workstation and
> voila'. Problem is, I don't trust them anymore. I can see getting an
> envrionment set up and based around workstation only to have them pull the
> rug out on that idea. I have a hard time with companies that have policies
> that change like the wind. One reason we never heavily invested in Novell
> products outside of Netware, you never new if they were really committed to
> the product.
>
>

Joseph Marton
26-Jan-2012, 22:33
GofBorg wrote:

> Don't care for fancy storage options, local storage is fine but
> I would like it to run ESX and have Vmwotion between two identical
> boxes.
>
> Pricing needs to come in under $40k all in.

Last year I ordered three new Cisco UCS servers, fully loaded with dual
6-core processors each, FC HBAs, the works, and I think all three
combined with 3-year 24x7x4 SmartNet came in at under $40k. I didn't
have much local storage. Then again... if you want VMotion you can't
really use local storage; you need shared storage. You could probably do
a config like what I did, but with the third server put a ton of
storage in it, install SLES, and make it a cheap iSCSI SAN. (You could
even put iSCSI HBAs into the other two servers.)
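The "cheap iSCSI SAN on SLES" idea could be as simple as a tgt target definition. A rough sketch follows — the IQN, device name, and subnet are all invented, and on the real box the entry would go in /etc/tgt/targets.conf with a tgtd restart afterwards:

```shell
# Hypothetical tgt target exporting a local RAID volume over iSCSI to the
# ESX hosts' subnet. Written to /tmp here for illustration only.
cat > /tmp/targets.conf.example <<'EOF'
<target iqn.2012-01.local.sanbox:vmstore>
    backing-store /dev/sdb              # the big local RAID volume
    initiator-address 192.168.10.0/24   # restrict access to the ESX hosts
</target>
EOF
cat /tmp/targets.conf.example
```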

--
Does this washcloth smell like chloroform?

Joseph Marton
26-Jan-2012, 22:39
Joseph Marton wrote:

> Last year I ordered three new Cisco UCS servers, fully loaded with
> dual 6-core processors each FC HBAs, the works and I think all three
> combined with 3-year 24x7x4 Smartnet came in at under $40k.

OK, I lied; the solution came in at about $43k.

3 x UCS 250 M2
- 2xIntel X5670 CPU
- 64GB RAM
- LSI RAID controller
- Intel quad port NIC
- QLogic QLE2562 dual-port 8Gbps FC HBA
- 2 x 146GB SAS 10K RPM HDD
- redundant power
- 3 year 24x7x4 onsite Smartnet

This environment ran vSphere 4 (ESXi 4.1) quite nicely. I'm sure you
could adapt that configuration to your needs and get it under $40k.
I'd definitely recommend it.

Or... if you want to go Dell... I've got a buddy there who might be
able to help you. :-)

But HP is another story... I had a terrible time with their support.

--
Does this washcloth smell like chloroform?

GofBorg
27-Jan-2012, 00:49
> This environment ran vSphere 4 (ESXi 4.1) quite nicely. I'm sure you
> could adapt that configuration to your needs and get it under $40k.
> I'd definitely recommend it.

It's pretty amazing what specs you can get on servers these days. It's only
slightly beyond overkill for what this project calls for, which is why the
current system has been viable for 15 years.

> Or... if you want to go Dell... I've got a buddy there who might be
> able to help you. :-)

Just munging around on the websites, Dell is my preference, plus we have other
Dell servers. I think I'm going to go with SLES and KVM; SLES is a bit less
picky at the hardware level than ESX, and KVM is in the kernel.

I'd be interested in talking with your Dell contact.
Shoot an email to oregfnkol@lnubb.pbz <--ROT13
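(For anyone without a ROT13 decoder handy: ROT13 is its own inverse, so a single tr(1) invocation both encodes and decodes.)

```shell
# ROT13 with tr: rotate each letter 13 places; running it twice round-trips.
echo 'Uryyb, jbeyq!' | tr 'A-Za-z' 'N-ZA-Mn-za-m'   # -> Hello, world!
```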

I'm looking at RAID10, so perhaps my storage-reliability requirements will be
met in that regard.

GofBorg
27-Jan-2012, 20:01
> Your comment on Novell's product commitment is ironic considering Novell
> walked away from NetWare and not its other core products like GroupWise,
> IDM and Zen. You win some, you lose some I guess. :D

Yep.

Bob Crandell
31-Jan-2012, 01:31
On Wed, 25 Jan 2012 20:07:57 +0000, kgroneman wrote:

> The old days

Speaking of that. One of my clients is really worried now:
http://www.nytimes.com/2012/01/26/technology/personaltech/protecting-a-cellphone-against-hackers.html?_r=2&scp=1&sq=cellphone%20security&st=cse

Just like I tell Windows users, "To minimize problems stay off the
internet and don't do email."