question

g003p asked

Source Controlling Proposed Environment & Suggestions

Looking for advice from those who have successfully source controlled their environment.

Current Environment: All devs make changes directly in production; there is an out-of-date dev environment that isn't really used yet.

Source Control Requirements: Follow a rapid release cycle (thinking a Continuous Integration methodology) and allow nightly production updates of those changes. Most of our work depends on up-to-date data. Management wishes to utilize snapshot technology and refresh weekly, which means our dev environment would end up being rebuilt weekly.

Problems I'm Running Into:

First proposed structure I was reviewing was 3-tier: DEV > BUILD > LIVE
1. Replicate the LIVE environment in DEV (shared dev environment machine).
2. Protect LIVE from any changes.
3. Source control DEV.
4. Commit only items ready to be pushed to BUILD and then, upon no issues, to LIVE (PROD) in the evening.
5. A commit triggers Red Gate SQL Compare to push the changes through to BUILD.
6. If there are no failures on the BUILD machine, push all committed changes through to LIVE at the end of the night.

Problem: No updated data. Most of our apps have many referential integrity relationships, and we don't want to manually script pulling in subsets of data that may have other relationships.

New proposed structure: DEV > BUILD > LIVE
1. Replicate LIVE in DEV (shared dev environment machine).
2. Commit all changes.
3. A commit triggers Red Gate SQL Compare to push the changes through to BUILD.
4. If there are no failures on the BUILD machine, push all committed changes through to LIVE at the end of the night.
5. Source control BUILD? Only committed BUILD changes go to LIVE.
6. The weekly refresh occurs, the snapshot/bookmark is moved, and all DEV work has been wiped out (a rough sketch of what I mean by this reset is at the end of this post).
7. Run SQL Compare from the repository against the new database and get back all the "lost work".

Problems:
1. No differentiation between dev-only and ready-for-release committed changes in the first step.
2. It requires source controlling both DEV and BUILD, which seems like extra work and doesn't truly follow a source control process. Wouldn't they get out of sync easily?
3. It doesn't follow any known methodology I've read about for Agile and CI database work.

What do you recommend? Assume for the sake of this discussion that I HAVE to have the snapshot reset DEV weekly. Assume I can't work with an empty schema, test data, or limited subsets because of all the intertwining of our database and application work; I really need to work with updated data. The first structure follows Red Gate's proposed CI model, but I'm not sure it will work for us. A local sandbox also won't work due to the size of the databases; we'd really like to stick with a shared environment.

EDIT: I read about SQL Virtual Restore, only to find that it was discontinued. Since we need to work with production data, the idea of having cloud sandboxes for each of the database instances isn't bad at all. Is there any option out there for creating "virtual" sandboxes that point to the backup file, allowing users to work with the data and yet update their sandbox separately? Also, upon command, refresh new objects in their sandbox?
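
For clarity on the weekly reset in step 6: the actual mechanism would be whatever snapshot/bookmark feature management has in mind on the storage side, but the effect on our dev work is the same as reverting a SQL Server database snapshot: everything changed in DEV since the snapshot point disappears. A rough, hypothetical illustration (all database, logical file, and path names below are made up):

    -- Right after DEV is rebuilt from LIVE, a snapshot marks the baseline:
    CREATE DATABASE DevDB_Baseline
    ON ( NAME = DevDB_Data, FILENAME = 'D:\Snapshots\DevDB_Baseline.ss' )
    AS SNAPSHOT OF DevDB;
    GO

    -- When the weekly refresh runs, DEV is reverted to that baseline and every change
    -- made since then (committed to source control or not) is gone from the database:
    USE master;
    ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    RESTORE DATABASE DevDB FROM DATABASE_SNAPSHOT = 'DevDB_Baseline';
    ALTER DATABASE DevDB SET MULTI_USER;
    GO

Step 7 would then be a SQL Compare run from the source control repository against the refreshed DEV database to script the committed objects back in.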
sql-server-2008 · best-practice · source-control
1 comment

Grant Fritchey ♦♦ commented ·
If any of the answers below were helpful to you, please show that by clicking on the thumbs up next to the answer. If any of the answers below solved your problem, please indicate that by clicking on the check box.

1 Answer

Grant Fritchey answered
I don't mind saying, I'm a little nervous at the idea of automated builds going out nightly without a human eyeballing the results in some fashion, but if you're comfortable with it, that's fine.

A shared development environment makes this somewhat harder, but not impossible. The key is going to be ensuring that you have a VERY good process for marking when changes within the dev environment are ready for promotion. I don't think you'll want to rely on just committing to source control. You may want some sort of label or other tagging mechanism as well. That way you only ever test the stuff that is ready.

After that, if I were going to implement an automated deployment, I would have a second label for successful builds in your BUILD environment. That way you only ever try to deploy stuff that successfully passed the BUILD process. Even better, just take the script that ran successfully and run that same script again on production.

Which brings up my one major issue with your approach: weekly is not often enough to refresh the environment. You have changes occurring, in theory, daily, and a test against those changes also daily. I would suggest that you need to refresh that BUILD environment on a daily basis. That way you're always testing against a current copy of production, which ensures that any tested script will, at least in theory, always work.
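
For example, the nightly refresh could be as simple as restoring last night's full production backup over the BUILD database before the automated run kicks off. A rough sketch (the server, database, logical file, and path names here are placeholders, not your actual environment):

    -- Kick everyone out of BUILD and lay down last night's production backup:
    USE master;
    ALTER DATABASE BuildDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    RESTORE DATABASE BuildDB
    FROM DISK = N'\\BackupShare\Prod\ProdDB_Full.bak'
    WITH MOVE N'ProdDB_Data' TO N'D:\SQLData\BuildDB.mdf',
         MOVE N'ProdDB_Log'  TO N'L:\SQLLogs\BuildDB_log.ldf',
         REPLACE,
         STATS = 10;

    ALTER DATABASE BuildDB SET MULTI_USER;

Once that restore completes, the automated job runs the deployment script generated from source control against the fresh copy, and only that same, proven script ever gets run against LIVE.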
