Just wanted to see what people's thoughts were on this.
4-node cluster:
3x - 4 quad-core processors, 32 GB RAM, SQL 2005 x64 Enterprise running on Windows Server 2003 Enterprise x64
1x - 2 quad-core processors, 16 GB RAM, SQL 2005 x64 Enterprise running on Windows Server 2003 Enterprise x64
FYI - I am aware that best practice is to have all nodes with the same hardware.
The question I have is with the max memory setting. After working with PSS, our current max memory is:
13 GB instance 1
12 GB instance 2
5 GB instance 3
It seems like a complete waste of resources to keep our settings like that when the 3 main active nodes have 32 GB.
I kind of understand why the best practice suggests this; however, in the real world it seems incredibly unlikely that 3 out of the 4 nodes would fail all at the same time. And if they did, we would likely have bigger issues and not be worrying about memory contention or startup time, especially if they all failed over to the less powerful passive node. Couldn't we always adjust it afterwards as well? Any thoughts? I am thinking at least all instances should go up to 15 GB.
Hi Matt, I can't seem to add a comment to your answer, so I am trying the question.
Thanks for the response. You are correct in your assumption: we have 3 instances running in a 4-node cluster.
I think the change came from a health check a year or two ago. This is the reasoning we got recently when we reviewed it again: since this is 64-bit, you need to leave more memory available to the kernel than we did in 32-bit environments. So it's best to leave at least 6 GB to the kernel/OS and then divide the rest (26 GB) among the 3 instances. The allocation of the 26 GB to the 3 instances doesn't necessarily have to be even if you know that one or more instances do not require as much RAM, and could resemble the following pattern: instance 1 gets max server memory of 14 GB, instance 2 gets max server memory of 10 GB, and instance 3 gets max server memory of 2 GB.
It isn't that I disagree with it; I was just wondering about real-world scenarios and whether the added safety of limiting max memory by that much is worth the performance trade-off for daily operations. What is the worst thing that could happen? Should we increase the max memory now and, if it does fail over, decrease it on the fly before the cache starts utilizing all of it?
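For what it's worth, the "decrease it on the fly" part is straightforward to script, since max server memory is a dynamic option that takes effect on RECONFIGURE without a restart. A minimal sketch of the kind of post-failover adjustment being discussed (the 12 GB value is just an example, not a recommendation):

```sql
-- Run against the failed-over instance; max server memory is dynamic,
-- so this takes effect immediately with no service restart.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Value is in MB; e.g. lower the cap to 12 GB so two instances fit on one node.
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```

Note that SQL Server releases memory back down to the new cap gradually, so lowering it right after failover (before the buffer pool has grown) is easier than clawing memory back later.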
Answer by Matt Whitfield
Can you clarify a bit what you mean? Because it seems to me like you have three separate SQL Server instances, each of which normally runs on one of the active nodes (so instance 1 normally runs on node A, instance 2 normally runs on node B, etc.).
If that is the case, then you could, relatively safely, update the max memory to 15 GB on each, as long as your failover strategy meant that instance 1 would fail over to node B, instance 2 to node C, and instance 3 to node A.
This would mean that with any of your 'active' nodes running two instances, the max commit for SQL Server would be 30 GB, leaving 2 GB for your OS to run in.
However, you haven't really detailed what led you and PSS to come to the conclusions you have done, so it's a bit difficult to say.
And I do thoroughly agree - if three of your nodes fail at the same time, then it's pretty likely all of your nodes will have failed because of some catastrophic failure of the data centre. Or somebody going in and turning off the air con because they were too cold (shortly before getting fired).
Answer by alen teplitsky
I would set it to around 28 GB per instance. If you get a failure, you can always change it manually.
We have 2 AA clusters and 1 PA cluster, each with 32 GB RAM per node, and we have never had a problem with two instances on a node and max memory set to 28 GB for the instance.