Scenario: a 100M-record table with a nonclustered index over an integer column, ordered ascending (but it could just as well be a clustered index).

Action: mark records as inactive by updating the indexed value to either -1 or 9999. For test purposes, let's say 50% of the data will be updated.

I know that a huge update touching 50% of the data will cause index fragmentation.

The table (when clustered) and the nonclustered index store data "ordered", let's say from 0 to 10.

So, will updating values to -1 cause the remaining rows to be moved within the index, making it slower than updating them to 9999 (or the maximum int value)?

Which is better/faster: updating to -1 or to 9999? Or doesn't it matter? And what if we consider defragmenting the index afterwards as a required step?

Edit: A new question: won't updating to the maximum value (or 9999) affect further work with the table? For example, if 50% of the data ends up on the last pages, any insert/update will land in the middle of the data, so every page after that point could be affected...

Edit2: I performed the following tests:

```
set nocount on
go

-- Scenario 1: update half of the rows to the MINIMUM value (-1)
if object_id('temp') is not null drop table temp
select convert(int, id) as id into temp from BIG_TABLE
create index IDX_id on temp(id)
go

dbcc dropcleanbuffers
dbcc freeproccache

set statistics io on
set statistics time on
update temp set id = -1
where id > 500000 and id < 999999999
select @@rowcount
set statistics io off
set statistics time off

-- insert 100,000 new rows that land in the middle of the index
declare @c int = 0
while @c < 100000
begin
    insert temp(id) values (@c)
    select @c = @c + 1
end
select @c

set statistics io on
set statistics time on
ALTER INDEX IDX_id ON temp REORGANIZE
set statistics io off
set statistics time off
go

-- Scenario 2: update half of the rows to the MAXIMUM value (999999999)
if object_id('temp') is not null drop table temp
select convert(int, registration_id) as id into temp from BIG_REGISTRATIONs
create index IDX_id on temp(id)
go

dbcc dropcleanbuffers
dbcc freeproccache

set statistics io on
set statistics time on
update temp set id = 999999999
where id > -1 and id <= 500000
select @@rowcount
set statistics io off
set statistics time off

-- insert 100,000 new rows that land in the middle of the index
declare @c int = 500000
while @c < 600000
begin
    insert temp(id) values (@c)
    select @c = @c + 1
end
select @c - 500000

set statistics io on
set statistics time on
ALTER INDEX IDX_id ON temp REORGANIZE
set statistics io off
set statistics time off
go
```

But I got inconclusive results: the difference was only milliseconds on both the update and the index reorganize, even when a lot of insertions landed in the middle of the pages.
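Timings alone may not tell the whole story; measuring fragmentation directly might show a difference that milliseconds hide. A possible follow-up check (assuming the `temp` table and `IDX_id` index from the script above still exist) would be to query `sys.dm_db_index_physical_stats` after each UPDATE and again after the REORGANIZE:

```sql
-- Measure fragmentation directly instead of inferring it from timings.
-- DETAILED mode scans every page, so it is slow but exact.
SELECT i.name                           AS index_name,
       s.index_level,
       s.avg_fragmentation_in_percent,  -- logical fragmentation
       s.avg_page_space_used_in_percent,-- page density
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('temp'), NULL, NULL, 'DETAILED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;
```

Comparing `avg_fragmentation_in_percent` and `page_count` between the -1 and 999999999 runs would show whether the two updates really leave the index in the same physical state.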

I am starting to suspect there is no difference between the minimum value (-1) and the maximum value (9999) when updating an indexed column. What do you think?