It seems to me that the problem can be solved with HTTP and some XML tags (or HTML META tags). What is needed is this information:
Which mirrors exist
Which mirror should be used
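Such mirror information could be carried in the document head, for example with LINK elements. This is only a sketch: the rel values "mirror" and "preferred-mirror" and the URLs are my own invention, not part of any existing standard.

```html
<head>
  <title>Some document</title>
  <!-- hypothetical rel values: "mirror" lists the mirrors that exist,
       "preferred-mirror" marks the one that should be used -->
  <link rel="mirror" href="http://mirror1.example.org/doc.html">
  <link rel="mirror" href="http://mirror2.example.org/doc.html">
  <link rel="preferred-mirror" href="http://mirror2.example.org/doc.html">
</head>
```

A browser (or proxy) that understands these elements could switch to the preferred mirror; one that does not would simply ignore them, so nothing breaks.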
On the other hand there is the "What does the browser do with the information?" problem. We had Mosaic, which interpreted the LINK element; others did not. The best way would be to develop a tag set that could also be integrated into XHTML. I suppose there already is one that makes it possible to decide what to do next once the header of a document has been retrieved. It would also be possible to distinguish between the stable and the dynamic parts of a page (maybe only the ads are dynamic). I don't think we need yet more new software; we need widely accepted protocols.
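For the stable/dynamic distinction, HTTP/1.1 in fact already offers a mechanism: the server can say how long a response stays valid. A rough sketch (the paths are made up) could look like this:

```
HTTP/1.1 200 OK
Cache-Control: max-age=86400
Content-Type: text/html

(the stable page /article.html, cacheable for a day)

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: text/html

(the dynamic part /ads/banner.html, revalidated on every request)
```

So the protocol side exists; what is missing is markup that lets authors split a page into parts with different lifetimes in the first place.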
I like the description of the browser side in the article (such as the "What's related" feature). But I think buttons and menus should be more flexible (functions should be loadable dynamically, not built in!). I can see the classical web page coming to an end. Why shouldn't every web page have a pull-down menu for its contents? If the browser recognizes an index, it could display it however it (or the user) wants. This would give compatibility with every browser type (even Lynx).
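HTML 4.0 already defines link types a browser could use for exactly this: a document can point to its table of contents, index, and neighbouring pages, and each browser is free to render those as a menu, a button bar, or (in Lynx) a plain link line. The filenames here are made up, but the rel values are from the HTML 4.0 specification:

```html
<head>
  <title>Chapter 3</title>
  <link rel="contents" href="toc.html">
  <link rel="index" href="index.html">
  <link rel="prev" href="chapter2.html">
  <link rel="next" href="chapter4.html">
</head>
```

The presentation is then entirely the browser's (or the user's) choice, which is just the flexibility argued for above.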
XML was invented so that one would not have to implement new features and elements every time someone has an idea. It is time for the WWW to support such ideas more effectively. And that also means: not limited to browsers at all!