We are running SQL Server 2008 Standard Edition 32-bit on Windows Server 2003 R2 Enterprise Edition 32-bit with 16GB of memory, of which 11GB is allocated to SQL Server. The main OLTP database we use has just passed the 600GB mark and another is 35GB. Both are replicated and log shipped to secondary servers.
Are we way under-provisioned on memory? If your main database were 600GB, what is the minimum memory you'd want on the server, i.e. to try to keep some of it and tempdb in memory?
NB. We can't raise the 11GB much, as replication, log shipping and a few other applications are also running - we did increase it to 12GB but started to see a few problems.
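For context, a cap like the 11GB described here is normally controlled with sp_configure's 'max server memory' option; a minimal sketch (the 11264 MB value simply mirrors the 11GB mentioned above - adjust for your own workload):

```sql
-- Cap SQL Server's buffer-pool memory (value is in MB).
-- 11264 MB here mirrors the 11GB allocation described above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 11264;
RECONFIGURE;
```

Note that on a 32-bit build, addressing memory above the 4GB virtual-address limit also requires AWE ('awe enabled') and the /PAE boot option.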
I think the maximum we can move to is 64GB, but I'm wondering whether even that is enough. I welcome any thoughts people have.
asked Sep 23 '11 at 09:12 AM in Default
If you take an extreme view on this, I could run a 600GB database on my netbook with its Intel Atom processor and 2GB of RAM - if I am the only person accessing the data and I don't write any crazy queries, all will be fine. If I want to support an online shopping website for a global brand, then I'm going to see problems. The memory you need for acceptable performance will be somewhere in between. I would certainly aim to upgrade your OS to 64-bit. I have run a CRM system that supported 500 staff (roughly 200 concurrent users on average) on a server with 20GB, so your 16GB may not be under-resourced.
Points to address:
Unless you are churning GBs of data between cycles, I don't think replication and log shipping will be a big stress. What cycle are you running - hourly or daily?
Check system performance at the times replication and log shipping run.
At the end of the day, more RAM certainly won't hurt, and if the hardware is going to be in use for a long time then you may just need it sooner or later.
answered Sep 24 '11 at 08:56 AM
There isn't an equation for how much memory you need based on the size of your database; it depends on the load on the server. How often is the data being accessed? I have a database that is nearly 1TB in size and had been running just fine on a server with 12GB of memory: 4 for the OS and 8 for SQL Server. I recently moved it to a bigger box with 16 cores and 64GB of memory, but the move was simply to upgrade the hardware and OS from an 8-year-old machine. Does the system run better? Sure, but that is also because we upgraded from SQL 2000 to 2005, so we got better query optimization, more RAM, better index usage, etc.
You are running SQL 2008, so take advantage of the DMVs related to memory pressure and see whether you are hitting any pressure points. Get a good baseline of your server so you can compare good times against bad times. "If you aren't measuring it, you can't manage it."
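As a concrete starting point, the performance-counter DMV exposes the usual memory-pressure indicators; a sketch (thresholds are workload-specific, so treat these as baseline inputs rather than pass/fail numbers):

```sql
-- Buffer-pool health indicators: Page Life Expectancy plus
-- total vs. target server memory (a large gap can indicate pressure).
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)');
```

Capturing these on a schedule gives you exactly the baseline described above.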
answered Sep 23 '11 at 10:34 AM
I agree with you all, but just want to add another opinion. One of the important factors is: how much memory can you afford? SQL Server loves memory - the more memory you have, the more data is cached and the fewer disk I/Os are performed. Disk I/O and CPUs (license fees) are expensive, but memory is cheap. Does your largest table fit in memory? If not, data will spill to disk whenever SQL Server has to scan the whole table (trace for sort and hash warnings). Of course, you shouldn't spend too much.
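To answer the "does your largest table fit in memory?" question, you can size the biggest tables from the system catalog; a rough sketch (pages are 8KB, so pages * 8 / 1024 gives MB):

```sql
-- Approximate size in MB of the five largest tables,
-- to compare against the memory available to the buffer pool.
SELECT TOP 5
       OBJECT_NAME(p.object_id) AS table_name,
       SUM(a.total_pages) * 8 / 1024 AS size_mb
FROM sys.partitions AS p
JOIN sys.allocation_units AS a
  ON p.partition_id = a.container_id
GROUP BY p.object_id
ORDER BY size_mb DESC;
```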
answered Sep 23 '11 at 11:46 AM
John, it depends on many factors. The best advice is to monitor your system during peak times. Here is a good TechNet article on memory bottlenecks and how to monitor them - scroll down to "Memory Bottlenecks": http://technet.microsoft.com/en-us/library/cc966540.aspx#EGAA
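Alongside the counters in that article, a quick way to see the memory state as SQL Server sees it is the sys.dm_os_sys_memory DMV (available from SQL Server 2008); a minimal sketch:

```sql
-- OS-level memory as seen by SQL Server, including the
-- high/low memory-signal state reported by Windows.
SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```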
answered Sep 23 '11 at 10:19 AM