question

sergiu asked

SQL Server 2014 In-Memory Database

Hi, has anyone had the chance to test the new In-Memory database feature in SQL Server 2014? I've been playing with it for more than a week now and I must honestly say I am very disappointed in it. There are quite a few bugs (critical, in my opinion) and I've been wondering if anyone else has run into the same nasty situations. Here are a couple of the bugs I've discovered:

1) On a machine with 4 GB of RAM I created a dedicated in-memory database with a memory-optimized table. I then tried to insert into this new table the contents of the same table from another, non-memory-optimized database (also a non-optimized table), which took up approximately 11 GB on disk. The moment I ran the script I expected an error telling me there was not enough RAM available, but the script started. I thought to myself that perhaps it uses a huge compression ratio (>11x) and would therefore load the whole data set compressed. BUT, after 20 minutes of inserting, I got an error saying the default resource pool was full. I then tried to drop, delete or truncate the table (by the way, TRUNCATE does not work on memory-optimized tables), but every time I got the same error: there isn't enough memory left in the default pool to complete the task. Any other script, even ones unrelated to this in-memory table, ended with the same error (e.g. SELECT * FROM sys.tables). Wth?! I tried to allocate more memory to the default pool, but it was already (by default, I guess) set to 80% of total available memory, which was pretty much all the OS was willing to give my SQL Server instance. I ended up killing every non-critical process/service on the machine to free up more RAM for the default pool just so I could drop the table. Not cool, MS! I expected that at parse time the query engine would estimate the size of the incoming data, compare it to the remaining memory and at least give me a warning (these counters have existed since SQL Server 2005 or 2008, so why not use them, MS team?!)... Anyway, I went to my manager with this and he got me a new machine to continue testing on, this time with 32 GB of RAM.

2) On this new machine I was finally able to import the table I was interested in, so where the disk-based table took 11 GB, the new in-memory table now occupied 9 GB of RAM (not such an impressive compression ratio, I must say). I started issuing a lot of queries against it while Windows Resource Monitor ran in the background. I noticed that after a couple of SELECT queries against this table, SQL Server had loaded up to 30 GB of RAM, even though right after I finished the inserts it had only taken 20 GB. After a few more SELECT queries, including actual or estimated execution plans, it finally "died". Meaning: a SELECT * query against this table, which took about 5 seconds right after the load finished, now returned its results in 2-3 minutes! All this time, 30 GB of RAM stayed allocated to SQL Server. My first guess was that, perhaps because of something I did, it couldn't release the RAM it had reserved, so I decided to restart the instance, which brings us to bug no. 3 :)

3) After restarting the instance (service), with Resource Monitor open, the moment the SQL Server service stopped it released all 30 GB of RAM at once! Nice... :-D A couple of seconds later it started grabbing batches of 1-2 GB of RAM per second until it reached 11 GB. Cool, I thought! Now I can resume my testing! ... Not so fast, pal! After refreshing the database nodes under the instance name in Management Studio, I noticed that my memory-optimized database was In Recovery! OK, let's wait a while (even though it seemed it had already loaded my entire table into RAM). I waited 5 minutes... nothing! I went for a smoke, came back, still in recovery! I then opened the Event Viewer, filtered on the MSSQL2014 events and started browsing through them. At 11:29 AM the SQL Server service started, along with database recovery. It was at 3% in phase 1 of 3. OK, let's wait some more. Long story short, it took 1h 20m to recover the database (started at 11:29, ended at 12:51). Really?

Now, imagine all three of the above bugs in a production environment. How would you feel when you have a 2-hour window to refresh your DWH and:
* after 20 minutes of inserting, you get an error saying you don't have enough free RAM left
* you can't take any action to free that memory - how fast can you scale then? :)
* one hour of intensive queries (reports) against this database fills (AND NEVER RELEASES) your entire memory
* in case of a system maintenance restart (or any other service restart), you have to wait a couple of hours for the database to finish recovery

My manager now asks me whether we should pay 11k EUR for a 4-core SQL Server 2014 license. What should I say? Go for it? No way...
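For anyone wanting to reproduce this, a minimal sketch of the kind of setup described above; the database name, file path and table definition are hypothetical placeholders, not the actual objects used here:

    -- Create a database with a MEMORY_OPTIMIZED_DATA filegroup (SQL Server 2014 syntax).
    CREATE DATABASE InMemTest;
    GO
    ALTER DATABASE InMemTest ADD FILEGROUP InMemTest_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE InMemTest ADD FILE (NAME = 'InMemTest_mod1', FILENAME = 'C:\Data\InMemTest_mod1')
        TO FILEGROUP InMemTest_mod;
    GO
    USE InMemTest;
    GO
    -- A durable memory-optimized table; every row and index lives entirely in memory,
    -- so the table must fit within the memory available to the instance.
    CREATE TABLE dbo.StagingRows
    (
        Id      INT            NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        Payload NVARCHAR(4000) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO
    -- Loading from a disk-based table is then a plain INSERT...SELECT, e.g.:
    -- INSERT INTO dbo.StagingRows (Id, Payload)
    -- SELECT Id, Payload FROM OtherDb.dbo.BigDiskTable;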
sql-server-2014, oltp

Grant Fritchey answered
So, what's your question? The in-memory tables are fairly limited. Primarily their purpose is to make data collection really, really fast within an OLTP system, eliminating latches and locks in favor of an optimistic concurrency strategy rather than SQL Server's traditional pessimistic one. Within that, and within the limits on data types, etc., all my testing shows pretty massive performance improvements. But this is for really large-scale systems.

Your first attempt at testing the thing on a system with 4 GB of memory and attempting to put an 11 GB table in it... that's just wrong, and goes against every bit of documentation and every recommendation for how this feature is meant to be used. You must have enough memory to support the tables you define. It's not a bug that it let you shoot yourself in the foot despite going past all the suggestions and documentation; SQL Server lets people do all sorts of things that they really shouldn't. It really sounds like you're testing this out on a seriously underpowered machine without taking into account what the feature is designed for.

Do I recommend you jump onto SQL Server 2014 for the in-memory functionality? No. But if you're going to, I do recommend you evaluate whether you actually need the in-memory functionality, and then evaluate whether or not the system you're going to implement it on will support it appropriately. You really can't put it on a system that can't support it and then say it doesn't work.
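One documented way to keep a memory-optimized database from exhausting the default resource pool is to bind it to its own Resource Governor pool with a memory cap. A sketch, assuming a database named InMemTest and a 50% cap (both illustrative values, not recommendations):

    -- Create a resource pool that limits how much memory the database's
    -- memory-optimized data may consume.
    CREATE RESOURCE POOL InMemPool WITH (MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 50);
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO
    -- Bind the database to the pool (SQL Server 2014).
    EXEC sys.sp_xtp_bind_db_resource_pool @database_name = N'InMemTest', @pool_name = N'InMemPool';
    GO
    -- The binding takes effect the next time the database is brought online.
    ALTER DATABASE InMemTest SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE InMemTest SET ONLINE;

With this in place, a load that outgrows the cap fails against the dedicated pool instead of starving the default pool that every other query depends on.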

sergiu answered
@Grant Fritchey: have you even read everything I wrote before replying?

1) What is my question? It's not quite in there, but I guess you don't have to be a genius to figure it out... Here it goes then, so there won't be any room for interpretation. Question: has anyone else encountered the above-mentioned bugs in SQL Server 2014 while testing the new In-Memory functionality, regarding BOTH memory-optimized tables AND a memory-optimized database (i.e. one that contains MEMORY_OPTIMIZED_DATA)?

2) Again, have you even read everything I wrote, or did you just get bored after the first paragraph and jump right to conclusions? I did change the machine I ran my initial tests on, to one with 32 GB of RAM. And I did mention other bugs in my tests, but if you are too busy to read all the way down, at least don't bother commenting just for the sake of commenting. It's really optional, you know? Cheers!
4 comments

Grant Fritchey commented:
I did read it. Your first bug isn't one. You ignored all the documentation and attempted to do something completely unsupported and advised against. Your second "bug" is normal SQL Server behavior: it loaded up memory to either the max memory set on the instance or to the amount the OS allowed it, in all likelihood because of the queries you were running. Not a bug, and not worth commenting on. Your final point, restarting the server to release memory (DBCC FREEPROCCACHE, DBCC DROPCLEANBUFFERS, or any other standard mechanism for clearing memory would have worked just as well as restarting the instance), relates to recovery. Something else was happening on the system; based on your other statements, I'm not sure what. I've done multiple recoveries with in-memory tables and natively compiled procedures and not hit the issue you hit. You should look at the recovery steps in the log to see what it was doing that took so long. No, that's not normal behavior. So, to answer your question: no, I haven't seen this type of behavior.
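For reference, a sketch of the memory-clearing commands referenced above. Note that these clear the plan cache and buffer pool; memory held by the memory-optimized rows themselves is only reclaimed by the in-memory garbage collector once rows are deleted or the table is dropped:

    -- Clear cached query plans across the instance.
    DBCC FREEPROCCACHE;
    -- Drop clean pages from the buffer pool.
    DBCC DROPCLEANBUFFERS;
    -- Release unused entries from all memory caches.
    DBCC FREESYSTEMCACHE ('ALL');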
CirqueDeSQLeil commented:
While I mostly agree with Grant, I will deviate slightly in that the recovery itself is normal behavior. Will it take that long to recover every time? No. But if transactions were running that needed to be rolled forward or back, it could take that long. As for the memory ramp-up, I am curious what you have min memory, max memory and total available memory set to on this server.
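A sketch of how those settings could be checked and adjusted; the 28 GB cap below is purely illustrative, not a recommendation for this particular server:

    -- Show current min/max server memory settings.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)';
    EXEC sp_configure 'max server memory (MB)';
    -- Example only: cap the instance at 28 GB, leaving headroom for the OS.
    EXEC sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;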
JohnM commented:
Yeah, let's make snarky comments about someone who is trying to help you and give you advice. For free. Bad form.
Grant Fritchey commented:
Yeah, I agree with @cirquedesqleil, recovery is normal. I just meant that recovery taking that long is not normal. Sorry for the lack of clarity.
CirqueDeSQLeil answered
In regard to your #2 bug, Grant addressed the memory use eloquently enough. It seems you are also concerned about the compression savings for the table. It would seem you have BLOBs, SLOBs or both - is that accurate? Do you have a lot of character-type fields holding mostly unique data, with the fields filled to capacity? Those scenarios will reduce compression efficiency (the same is true of most compression engines).
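To see where that memory actually goes, a sketch of a per-table check against the SQL Server 2014 in-memory DMVs, run in the memory-optimized database:

    -- Memory allocated and used per memory-optimized table, split into data and indexes.
    SELECT
        OBJECT_NAME(t.object_id)                     AS table_name,
        t.memory_allocated_for_table_kb / 1024.0     AS table_alloc_mb,
        t.memory_used_by_table_kb / 1024.0           AS table_used_mb,
        t.memory_allocated_for_indexes_kb / 1024.0   AS index_alloc_mb,
        t.memory_used_by_indexes_kb / 1024.0         AS index_used_mb
    FROM sys.dm_db_xtp_table_memory_stats AS t
    ORDER BY t.memory_allocated_for_table_kb DESC;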
