Indexes represent the data in the underlying tables. As data in those tables is modified, inserted, and deleted, the physical structure of the indexes becomes fragmented. The greater the fragmentation, the bigger the performance impact on queries that use those indexes; typically this manifests as increased disk I/O. Rebuilding the index removes the fragmentation. You also gain an additional benefit: an index rebuild updates the index's statistics, further improving query performance by giving the optimizer a better chance of choosing an optimal execution plan. However, it's not necessary to rebuild an index just to update its statistics - that can be done independently.
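For example, in SQL Server the two operations look like this (the table and index names below are placeholders, not anything from your database):

```sql
-- Rebuild the index: removes fragmentation AND refreshes its statistics
-- with a full scan. dbo.Orders / IX_Orders_CustomerId are placeholder names.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Update only the statistics, without touching the index structure.
-- WITH FULLSCAN gives the same statistics quality as a rebuild would.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId WITH FULLSCAN;
```

The second statement is much cheaper than a rebuild, which is why updating statistics independently is often the better choice when fragmentation itself isn't the problem.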
As the data in an index changes, the location and distribution of that data on the pages used to store it on disk changes. When data is inserted, there may no longer be enough room on the target page, so a new page is allocated and some of the rows are moved to it - this is called a page split. When data is deleted, it's removed from existing pages, leaving empty space behind. When data is updated, rows can be moved, which is effectively a delete (creating space) followed by an insert (which can cause further page splits). And this goes on all the time. Over time, the index ends up scattered across the disk, with gaps within and between the pages. A rebuild writes the index back out contiguously, eliminating the gaps and empty space, making the storage more compact and therefore quicker to access.

Additionally, during an index rebuild the statistics are completely rebuilt using a full scan of the data. Unless you have manual statistics maintenance in place, this generally never happens otherwise. Instead you get sampled updates, triggered by the amount of data changed within a table. Sampled updates are not as accurate as a full scan, so the rebuild gives you better statistics as well.
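To see whether a rebuild is actually warranted, you can measure fragmentation with the `sys.dm_db_index_physical_stats` DMV. The query below is a sketch against the current database; the 10% filter is just an arbitrary cut-off for the report, and the commonly cited (rule-of-thumb, not absolute) guidance is to reorganize between roughly 5-30% fragmentation and rebuild above 30%:

```sql
-- Report fragmentation for all indexes in the current database.
-- 'LIMITED' scans only the B-tree's upper levels, so it's cheap to run.
SELECT
    OBJECT_NAME(ips.object_id)       AS table_name,
    i.name                           AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10  -- arbitrary reporting threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Note that fragmentation percentages on very small indexes (a few dozen pages) are largely meaningless, since the whole index may fit in memory anyway - the `page_count` column helps you ignore those.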