Suggestions for max server memory configuration in a 4-node A/A/A/P cluster

Just wanted to see what people's thoughts were on this.

4-node cluster:
3x: 4 quad-core processors, 32 GB RAM, SQL Server 2005 x64 Enterprise running on Windows Server 2003 Enterprise x64
1x: 2 quad-core processors, 16 GB RAM, SQL Server 2005 x64 Enterprise running on Windows Server 2003 Enterprise x64

FYI: I am aware that best practice is to have all nodes with the same hardware.

The question I have is with the max memory setting. After working with PSS, our current max memory settings are:
Instance 1: 13 GB
Instance 2: 12 GB
Instance 3: 5 GB

It seems like a complete waste of resources to keep our settings like that when the 3 main active nodes have 32 GB each.

I sort of understand why the best practice suggests this, but in the real world it seems incredibly unlikely that 3 of the 4 nodes would fail at the same time. And if they did, we would likely have bigger issues and would not be worrying about memory contention or startup time, especially if they all failed over to the less powerful passive node. Couldn't we always adjust the setting afterwards as well? Any thoughts? I am thinking all instances should go up to at least 15 GB.

Hi Matt, I can't seem to add a comment to your answer, so I am trying the question instead.

Thanks for the response. You are correct in your assumption: we have 3 instances running in a 4-node cluster.
I think the change came from a health check a year or two ago. This is the reasoning we got recently when we reviewed it again: since this is 64-bit, you need to leave more memory available to the kernel than in 32-bit environments. So it's best to leave at least 6 GB to the kernel/OS and then divide the rest (26 GB) amongst the 3 instances. The allocation of the 26 GB to the 3 instances doesn't necessarily have to be 8 GB apiece if you know that one or more instances do not require as much RAM; it could resemble the following pattern: Instance 1 gets a max server memory of 14 GB, Instance 2 gets 10 GB, and Instance 3 gets 2 GB.

It isn't that I disagree with it; I was just wondering about real-world scenarios and whether the added safety of limiting max memory by that much is worth the performance trade-off for daily operations. What is the worst thing that could happen? Should we increase the max memory now and, if an instance does fail over, decrease it on the fly before the cache starts utilizing all of it?
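For what it's worth, the adjust-after-failover approach is mechanically simple, because max server memory is a dynamic setting that takes effect on RECONFIGURE with no service restart. A sketch (the 15 GB figure is just the value floated above, not a recommendation):

```sql
-- Run against the instance whose limit you want to change.
-- 'max server memory' is dynamic, so RECONFIGURE applies it
-- immediately; no restart or failback is required.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the limit to 15 GB (the value is specified in MB):
EXEC sp_configure 'max server memory (MB)', 15360;
RECONFIGURE;

-- Verify the running value:
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
```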


asked Oct 15, 2009 at 12:29 PM in Default


Brian, you'll need to build up your 'rep' by answering questions and asking questions that get voted for to gain the ability to do things like comment. See the FAQ for details!

Oct 16, 2009 at 12:58 PM Melvyn Harbour ♦♦

Brian, I'm really sorry; I only just noticed you had edited your question. The worst thing that could happen is that SQL Server asks for more memory than is available and/or starts squeezing the OS. You can't say 'limit these three instances to X between them'. But seeing as they normally run on separate nodes, then yes, I would up the max memory personally and respond to the situation in case of a failover.

Nov 05, 2009 at 06:30 AM Matt Whitfield ♦♦

2 answers

Can you clarify a bit about what you mean? Because it seems to me like you have three separate SQL Server instances, each of which normally runs on one of the active nodes (so instance 1 normally runs on node A, instance 2 normally runs on node B, etc.).

If that is the case, then you could relatively safely update the max memory to 15 GB on each, as long as your failover strategy means that instance 1 would fail over to node B, instance 2 to node C, and instance 3 to node A.

This would mean that with any of your 'active' nodes running two instances, the maximum commit for SQL Server would be 30 GB, leaving 2 GB for your OS to run in.

However, you haven't really detailed what led you and PSS to the conclusions you reached, so it's a bit difficult to say.

And I do thoroughly agree - if three of your nodes fail at the same time, then it's pretty likely all of your nodes will have failed because of some catastrophic failure of the data centre. Or somebody going in and turning off the air con because they were too cold (shortly before getting fired).


answered Oct 16, 2009 at 08:54 AM

Matt Whitfield ♦♦


I would set it to around 28 GB per instance. If you get a failure, you can always change it manually.

We have 2 A/A clusters and 1 P/A cluster, each with 32 GB RAM per node, and we have never had a problem with two instances on a node and max memory set to 28 GB per instance.
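Manually changing the limit after a failure, as suggested here, can be done while the instance stays online; lowering max server memory causes the buffer pool to shrink gradually rather than instantly. A minimal sketch, using the thread's original 13 GB figure purely as an example target:

```sql
-- If a failover lands two instances on one node, drop the limit
-- on the fly; no restart is needed, and the buffer pool will
-- release memory down toward the new cap over time.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 13312;  -- 13 GB, in MB
RECONFIGURE;
```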


answered Oct 21, 2009 at 04:51 PM

alen teplitsky






Last Updated: May 17, 2013 at 02:04 AM
