
Question re NLB


A techie question here.

If you have 1x host (with, say, 3x VMs used as web servers), and that 1x host is physically connected to a physical HW NLB, then obviously the NLB serves its purpose of sharing the load across all 3x VM web servers.

But because there's only a single cable going from the 1x host to the NLB, surely there's still a bottleneck? Or am I not thinking straight?


No, that's about right. Unless you can put multiple NICs in your host machine, connect them to a switch, and bind each VM to its own NIC port, which would reduce the bottleneck since each VM's IP would then be routed via its own dedicated link.

 

In reality, this depends on a few things, including the type of load balancer you're using (do you really need a load balancer? Couldn't your VMs be QoS'd from the host machine?), the type of VMs being served (a mail server would probably push a lot less traffic than your firewall, for instance) and the actual network you're running... 100Mb/1000Mb/10GbE?

 

In your instance, you're hosting 3 web servers. If all three are lightly used throughout the day and connected to a single 1000Mb port, there'll probably be no noticeable bottleneck from the host machine's perspective, but it also depends on the content being served, i.e. video streaming takes far more bandwidth than standard HTML/CSS. You need to ask how many concurrent clients are likely to connect at any one time. You're far more likely to hit your bottleneck at the connection out to the 'net unless you're co-lo'd at a decent location such as Telehouse with a 10GbE link out - your 'net connection itself will usually be the bottleneck.
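To put rough numbers on that, here's a back-of-envelope sketch. All of the figures (clients, per-client bandwidth, link speeds) are illustrative assumptions, not measurements from the setup being discussed:

```python
# Back-of-envelope check: is the host's single link the bottleneck,
# or the outbound internet connection? All figures are assumptions.

def saturation(concurrent_clients, kbps_per_client, link_mbps):
    """Fraction of a link's capacity consumed by the given client load."""
    demand_mbps = concurrent_clients * kbps_per_client / 1000
    return demand_mbps / link_mbps

# 300 concurrent clients pulling ~50 kb/s of HTML/CSS each:
host_link = saturation(300, 50, 1000)  # gigabit host link
net_link = saturation(300, 50, 100)    # assumed 100 Mb/s internet uplink

print(f"host link utilisation: {host_link:.1%}")   # 1.5%
print(f"net uplink utilisation: {net_link:.1%}")   # 15.0%
```

Even with generous assumptions, the internal gigabit link barely registers, while the (much thinner) uplink to the 'net takes ten times the hit - which is the point being made above.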

 

You'll always have a bottleneck somewhere, even if it's not a physical one - timezones create bottlenecks because more people are online at certain times of day, creating capacity issues, and are more likely to hit particular servers/routers at those times. Sometimes there's not much you can do to alleviate a given bottleneck and you just have to live with it.


Thank you for the comprehensive answer. Wow, a lot of things I didn't consider.

 

Yeah, it's just an ASP/SQL-driven website for people to submit data in a UK-based 9-5 office environment, so peaking around 11am-3pm, over a 1000Mb (gigabit) network. I'm allowing about 100 users per VM web server, so 3x web servers for 300 users, plus one VM clone outside the NLB (from loadbalancer.org) as a backup just in case of anything.
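A rough peak-load estimate for that setup suggests the request rate will be tiny. The submission frequency below is an invented assumption for illustration:

```python
# Rough peak-load estimate: 300 office users, each assumed to submit a
# data-entry form every 5 minutes during the 11am-3pm peak.

users = 300
submissions_per_user_per_hour = 12                    # one form every 5 min
request_rate = users * submissions_per_user_per_hour / 3600   # req/sec total

per_vm = request_rate / 3                             # spread over 3 VMs

print(f"{request_rate:.1f} req/s total, {per_vm:.2f} req/s per VM")
# 1.0 req/s total, 0.33 req/s per VM
```

At well under one request per second per VM, a gigabit network and three load-balanced web servers are nowhere near capacity for form submissions.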

 

Then I've got a UAT / test / training environment, but these will be used minimally.


Hold on, so you're running 3 separate VMs on one host machine to balance intranet website traffic for one website, i.e. you're in effect trying to build a virtual cluster? If that's the case, I personally don't see any real-world performance improvement in what you're trying to achieve, and you may be overcomplicating the issue. In fact, you'll probably be wasting CPU cycles on the host machine by having it run the VMs rather than just running the machine as a dedicated web server - if my understanding of your situation is indeed correct?

 

If you were running a small physical cluster of two machines, with a primary and a mirrored backup for redundancy, you could load balance against a decently managed switch and run DNS so that failover from primary to secondary happens fairly easily. This doesn't sound too mission-critical, else you could also have built in some off-site redundancy.
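The failover idea can be sketched as a watchdog that polls the primary and only repoints traffic after several consecutive failed checks, so one dropped probe doesn't needlessly flip the site to the secondary. This is a minimal sketch of the decision logic only; the actual health probe and DNS/switch update would be specific to your kit:

```python
# Sketch of primary/secondary failover decision logic. The health-check
# history would come from a real probe (e.g. an HTTP GET against the
# primary); here it's just a list of booleans for illustration.

def choose_active(primary_health_history, threshold=3):
    """Fail over only after `threshold` consecutive failed checks,
    so a single dropped probe doesn't trigger a flip."""
    recent = primary_health_history[-threshold:]
    if len(recent) == threshold and not any(recent):
        return "secondary"
    return "primary"

print(choose_active([True, True, False]))            # primary
print(choose_active([True, False, False, False]))    # secondary
```

The threshold-based debounce is the design point: DNS-based failover is slow to propagate anyway, so you want to be sure the primary is actually down before switching.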

 

From a network admin perspective, I'd say you'd want your devs to push as much processing as possible to the client side in order to maintain server performance, e.g. have the client do as much form sanity checking via JavaScript as possible to mitigate some of the traffic and server load. The server should still do its own sanity checking of the form/data as a catch-all, but if the client side can validate the data ranges as they're input, it potentially saves bandwidth and server cycles compared to having the server reject something back to the client.
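The "client validates first, server re-checks as the catch-all" split might look like the sketch below, with the same range rules mirrored in JavaScript on the client. Field names and ranges are invented for illustration, not taken from the site being discussed:

```python
# Sketch of server-side catch-all validation. The same rules would be
# duplicated in client-side JavaScript so bad input is rejected before
# it ever crosses the wire; the server re-checks everything regardless.

RULES = {
    "age":   lambda v: v.isdigit() and 0 < int(v) < 130,
    "hours": lambda v: v.isdigit() and 0 <= int(v) <= 24,
}

def validate(form):
    """Return the list of field names that fail their rule."""
    return [f for f, ok in RULES.items() if not ok(form.get(f, ""))]

print(validate({"age": "34", "hours": "8"}))    # []
print(validate({"age": "-1", "hours": "99"}))   # ['age', 'hours']
```

Duplicating the rules is deliberate: client-side checks save round-trips, but only the server-side pass can be trusted, since anyone can bypass the browser.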

 

If it were me, I'd personally prefer any test environment to be physically separate from live if possible, to provide the maximum safety margin. If that's not possible, segment it as much as you can to minimise risk. I've heard too many stories, and had to deal with the fallout a few times, where someone thought they were working on a test environment or backup when in reality their actions took out the live platform.


As the posters above have said, there's no real point to what's being done. An NLB is usually used to front separate physical systems in order to remove a single point of failure. Just connect everything directly, perhaps use a hardware device to offload SSL processing if needed, and everything will be a lot easier and a little faster.
