Hi, has anyone had the chance to test the new In-Memory OLTP feature in SQL Server 2014? I've been playing with it for more than a week now and I must honestly say I am very disappointed with it. There are quite a few bugs (critical, in my opinion) and I've been wondering if anyone else has encountered the same nasty situations. Here are a couple of the bugs I've discovered:

1) On a machine with 4 GB of RAM I created a dedicated in-memory database with a memory-optimized table. I then tried to insert into this new table the contents of the same table from another non-memory-optimized database (this was also a non-optimized table), which took approx. 11 GB of physical space. The moment I ran the script I was expecting an error message telling me there wasn't enough RAM available, but the script started. I thought to myself that perhaps it uses a huge compression rate (>11x) and would therefore load the entire data set compressed. BUT, after 20 minutes of inserting, I got an error message saying the default pool was full. I then tried to drop, delete or truncate the table (by the way, TRUNCATE does not work with memory-optimized tables), but every time I tried I got the same error message saying there isn't enough memory left in the default pool to complete the task. Also, any other script, not even related to this in-memory table, ended with the same error message (e.g. SELECT * FROM sys.tables). Wth?! I tried to allocate more memory to this default pool, but it was already (by default, I guess) set to 80% of total available memory, which was pretty much all the OS was willing to give my SQL Server instance. I ended up killing every non-critical process/service on the machine so I could free up more RAM for the default pool and finally drop the table. Not cool, MS!
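For anyone hitting the same wall: the usual mitigation is to cap In-Memory OLTP memory with a dedicated Resource Governor pool *before* loading data, so XTP allocations can't starve the rest of the instance. A minimal sketch, assuming a database named InMemTest (the database and pool names are placeholders):

```sql
-- Create a dedicated pool and cap it (here at 50% of server memory).
CREATE RESOURCE POOL Pool_InMemTest
    WITH (MAX_MEMORY_PERCENT = 50);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Bind the memory-optimized database to the pool.
EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'InMemTest',
    @pool_name     = N'Pool_InMemTest';
GO
-- The binding takes effect the next time the database comes online.
ALTER DATABASE InMemTest SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE InMemTest SET ONLINE;
GO
```

With the binding in place, a runaway insert errors out against the pool's cap instead of eating the instance-wide default pool, so unrelated queries (and the DROP TABLE cleanup) keep working.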
I was expecting that at parse time the query engine would estimate the size of the incoming data, compare it against the remaining memory, and at least give me a warning message or something (these counters have existed in SQL Server since SQL Server 2005 or 2008, so why not use them, MS team?!?)... Anyway, I went to my manager with this and he got me a new machine to continue the testing on, this time with 32 GB of RAM.

2) On this new machine I was finally able to import the table I was interested in, so this time, where the physically stored table took 11 GB, the new in-memory table occupied 9 GB of RAM (not such an impressive compression rate, I must say). I started issuing a lot of queries against it, with Windows Resource Monitor running in the background. I noticed that after a couple of SELECT queries against this table, SQL Server had loaded up to 30 GB of RAM, although right after I finished the inserts into this table it only held 20 GB. After a couple more SELECT queries, including actual or estimated execution plans, it finally "died". Meaning, where a SELECT * query against this table, right after the load ended, took about 5 seconds, it now returned the results in 2-3 minutes! All this time, 30 GB of RAM stayed allocated to SQL Server. My first guess was that, perhaps because of something I did, it couldn't release the RAM it had reserved. So I decided to restart the instance, which gets us to bug no. 3 :)

3) After restarting the instance (service), with Resource Monitor open, the moment the SQL Server service stopped it released all 30 GB of RAM at once! Nice.... :-D A couple of seconds later it started grabbing batches of 1-2 GB of RAM per second until it reached 11 GB. Cool, I thought! Now I can resume my testing! ... Not so fast, pal! After refreshing the database nodes under the instance name in Management Studio, I noticed that my memory-optimized database was In Recovery!
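As an aside, the memory actually held by memory-optimized tables is easier to read from the DMVs than from Resource Monitor, which only shows the process total. A sketch (run the first query inside the memory-optimized database):

```sql
-- Per-table memory held by the In-Memory OLTP engine.
SELECT OBJECT_NAME(t.object_id)                  AS table_name,
       t.memory_allocated_for_table_kb / 1024.0  AS allocated_mb,
       t.memory_used_by_table_kb / 1024.0        AS used_mb
FROM sys.dm_db_xtp_table_memory_stats AS t;

-- Total instance-wide memory held by the XTP memory clerk.
SELECT SUM(pages_kb) / 1024.0 AS xtp_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'MEMORYCLERK_XTP';
```

Comparing `allocated_mb` with `used_mb` distinguishes memory genuinely used by rows and indexes from memory the engine has allocated but not yet released back (e.g. rows awaiting garbage collection), which may explain part of the 20 GB vs 30 GB gap.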
Ok, let's wait a while (although it seemed it had already loaded my entire table into RAM). I waited 5 minutes... nothing! I went for a smoke, came back, still in recovery! I then opened the Event Viewer, filtered on the MSSQL2014 events and started browsing through them. At 11:29 AM the SQL Server service had started, along with the database recovery (event). It was at 3% in phase 1 out of 3. Ok, let's wait some more. Long story short, it took 1h:20m to recover the database (started at 11:29, ended at 12:51). Really? ...

Now, imagine all 3 of the above bugs in a production environment. How would you feel when you have a 2-hour window to refresh your DWH and:
* after 20 minutes of inserting, you get an error message saying you don't have enough free RAM left
* you can't take any action to free that memory - how fast can you scale then? :)
* one hour of intensive queries (reports) run against this database fills (AND NEVER RELEASES) your entire memory
* in case of a system maintenance restart (or any other service restart), you have to wait a couple of hours for your database to finish recovery

My manager now asks me if we should pay 11k EUR for a 4-core SQL Server 2014 license. What should I say? Go for it? No way...
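One note on the recovery wait: crash-recovery progress is also visible from `sys.dm_exec_requests`, which populates `percent_complete` for the internal DB STARTUP command, so you don't have to reconstruct it from the Event Viewer. A minimal sketch:

```sql
-- Watch recovery progress for databases stuck "In Recovery".
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_left
FROM sys.dm_exec_requests
WHERE command = N'DB STARTUP';
```

It won't make recovery faster, but at least it tells you whether waiting another hour is worth it.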
So, what's your question? The in-memory tables are fairly limited. Their primary purpose is to make data collection really, really fast within an OLTP system, trading SQL Server's traditional pessimistic locking for an optimistic strategy that eliminates latches and locks. Within that, and within the limits on data types, etc., all my testing shows pretty massive performance improvements. But this is for really large-scale systems. Your first attempt at testing the thing on a system with 4 GB of memory while trying to load an 11 GB table... that's just wrong, and goes against every bit of documentation and every recommendation for how this feature works. You must have enough memory to support the tables you define. It's not a bug that it let you shoot yourself in the foot after you went past all the suggestions and documentation. SQL Server lets people do all sorts of things they really shouldn't. It really sounds like you're testing this on a seriously underpowered machine without taking into account what the feature is designed for. Do I recommend you jump to SQL Server 2014 for the in-memory functionality? No. But if you're going to, I do recommend you evaluate whether you actually need the in-memory functionality, and then evaluate whether the system you're going to implement it on will support it appropriately. You really can't put it on a system that can't support it and then say it doesn't work.
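For completeness, here is a minimal sketch of what a correctly provisioned memory-optimized table looks like in SQL Server 2014 (database, filegroup, file path and table names are all placeholders); note the restricted data types and the hash index whose bucket count must be sized up front:

```sql
-- A memory-optimized table requires a MEMORY_OPTIMIZED_DATA filegroup.
ALTER DATABASE InMemTest
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE InMemTest
    ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
    TO FILEGROUP imoltp_fg;
GO
-- Durable memory-optimized table with a nonclustered hash primary key.
CREATE TABLE dbo.Orders
(
    OrderId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The BUCKET_COUNT should be sized to roughly the expected number of unique key values; it cannot be changed without recreating the table in 2014, which is part of why capacity planning up front matters so much for this feature.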
@Grant Fritchey: have you even read everything I wrote before replying? 1) What is my question? It's not spelled out in there, but I guess you don't have to be a genius to figure it out... Here goes, then, so there's no room for interpretation. Question: Has anyone else encountered the above-mentioned bugs in SQL Server 2014 while testing the new In-Memory functionality, regarding BOTH memory-optimized tables AND memory-optimized databases (aka CONTAINS MEMORY_OPTIMIZED_DATA)? 2) Again, have you even read everything I wrote, or did you just get bored after the first paragraph and jump right to conclusions? I did change the machine I ran my initial tests on, to one with 32 GB of RAM. And I did mention other bugs in my tests, but if you are too busy to read all the way down, at least don't bother commenting just for the sake of commenting. It's really optional, you know? Cheers!
In regard to your #2 bug, Grant addressed the memory use eloquently enough. It seems you are also concerned about the compression savings for the table. It sounds like you have BLOBs, SLOBs or both. Is that accurate? Do you have a lot of character-type fields, mostly filled with largely unique data? Those scenarios reduce compression efficiency (the same is true of most compression engines).