I must ask: if it is necessary to update billions of rows, why not batch process them? Batching up large updates/inserts/deletes against a table can be orders of magnitude quicker, because of how these actions are locked, blocked, and logged. I suggest you look into creating a job/procedure that processes `N` rows at a time, and possibly place that in a loop or a regularly recurring SQL Agent job. If you do it this way, you will see how much faster it can be, and you will also reduce locking/blocking on your system (I don't know if that is an issue at present).
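The loop could be sketched roughly like this. This is a minimal T-SQL example, assuming a hypothetical table `dbo.BigTable` with a column `SomeColumn` whose old value identifies the rows still to be processed:

```sql
-- Process the update in batches of @BatchSize rows instead of one huge transaction.
DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) dbo.BigTable
    SET    SomeColumn = 'NewValue'
    WHERE  SomeColumn <> 'NewValue';   -- predicate picks out rows not yet updated

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last (partial) batch done, exit the loop

    CHECKPOINT;  -- under SIMPLE recovery, allows the log to truncate between batches
END
```

Each iteration is its own short transaction, so locks are held briefly and the transaction log never has to hold the whole operation at once.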
I agree with WilliamD. Trying to update billions (or even millions) of records in one single batch is not an optimal solution. Besides the locking issues, which you may partially solve with "read committed snapshot" in SQL 2005 and later (at the cost of a lot of IO in TempDB), you may have issues with the transaction log, which will probably grow very large even under simple recovery mode, because everything is handled in one huge transaction. Try to keep transactions as short as possible. If you need to do this update while the system is "online", it will take a lot longer, because you can't disable foreign key constraints and indexes. If you can take the system "offline", you can disable all the foreign keys on the table and all the indexes before the update, then re-enable them afterwards.
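As a rough illustration of that offline approach, assuming hypothetical names for the table, foreign key, and index:

```sql
-- Before the big update: disable a foreign key and a nonclustered index.
ALTER TABLE dbo.BigTable NOCHECK CONSTRAINT FK_BigTable_Parent;
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;

-- ... run the large update here ...

-- Afterwards: rebuild the index and re-enable the constraint.
-- WITH CHECK revalidates existing rows so the constraint is trusted again.
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;
ALTER TABLE dbo.BigTable WITH CHECK CHECK CONSTRAINT FK_BigTable_Parent;
```

Note that re-enabling with `WITH CHECK` rescans the table to validate the constraint, which itself takes time on a table this size, so plan that into the maintenance window.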
I agree with what WilliamD and Håkan Winther say. But I'd also like to add: what update operations do you need to perform on billions of rows at once? Is this something that is done regularly? Please tell us more about the scenario, as there may be other design options worth exploring.