Good idea, but the implementation raises question marks
I think the idea behind this is plausible, but I wonder if all the assumptions are correct. These are my questions/reservations:
The mirror problem: there is nothing that prevents a large site from verifying its mirrors and updating its web site dynamically. Nothing prevents them from dynamically presenting only a subset of all mirrors at any given time, thereby creating a form of load sharing. Even if this would be a site-specific implementation, it could work similarly to how multiple DNS records ease the load on large internet sites. In fact, if you could get your HTTP/FTP mirrors to agree on a common directory structure, you could create this load sharing for downloads alone.
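To illustrate, the rotation could be as simple as the site picking a random subset of its verified mirror pool on every page render — a minimal sketch, with made-up mirror names:

```python
import random

# Hypothetical pool of verified mirrors for a site (names are invented).
ALL_MIRRORS = [
    "http://mirror1.example.org/pub",
    "http://mirror2.example.org/pub",
    "ftp://mirror3.example.org/pub",
    "http://mirror4.example.org/pub",
    "ftp://mirror5.example.org/pub",
]

def mirrors_for_request(k=2):
    """Return a random subset of k mirrors, so that successive page
    requests spread downloaders across the pool, much like rotating
    multiple DNS A records spreads connections across hosts."""
    return random.sample(ALL_MIRRORS, k)
```

Each visitor sees a different slice of the pool, so no single mirror soaks up all the traffic.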
The P2P (read: BitTorrent) problem and the no-seeds argument are pretty much void for anyone distributing their own content this way. If I choose to distribute my project via BitTorrent, I will of course ensure that I myself am always seeding.
Another problem is that segmented downloads put a lot of pressure on client implementations. I cannot see how you could successfully mix a BitTorrent download and an FTP download unless the client itself implements both of these protocols.
Servers also need to support segmented transfers; as far as I know, not all FTP servers do. Clients need to handle this as well.
The single point of failure argument is only addressed if the site serving the metalink is itself redundant: not having access to the metalink is just as much of a problem as broken mirrors are.
The proposed solution seems quite complex, and therefore I remain skeptical about its success.
I also have some suggestions for you.
You may want to include a preference parameter for choosing between different protocols; as I understand it now, the preference parameter is only used to choose between mirrors of the same type.
You should start developing metalink libraries in various languages, both for interpreting these links and for doing the downloading. That way, it seems to me, client acceptance would be easier to achieve.
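The parsing half of such a library is not much work. A minimal sketch, assuming a Metalink 3.0-style XML document (the http://www.metalinker.org/ namespace with <url type="..." preference="..."> elements under <resources>; the sample file and mirror names are made up):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.metalinker.org/}"

def parse_mirrors(metalink_xml):
    """Return (type, preference, url) tuples sorted by descending
    preference, so a downloader can try the best mirrors first."""
    root = ET.fromstring(metalink_xml)
    mirrors = []
    for url in root.iter(NS + "url"):
        mirrors.append((
            url.get("type", "http"),          # protocol of this mirror
            int(url.get("preference", "0")),  # higher = preferred
            url.text,
        ))
    mirrors.sort(key=lambda m: m[1], reverse=True)
    return mirrors

# Invented sample metalink for demonstration:
SAMPLE = """<?xml version="1.0"?>
<metalink version="3.0" xmlns="http://www.metalinker.org/">
 <files><file name="demo.iso"><resources>
  <url type="ftp" preference="90">ftp://mirror.example.org/demo.iso</url>
  <url type="http" preference="100">http://mirror.example.net/demo.iso</url>
 </resources></file></files>
</metalink>"""
```

The download half (HTTP/FTP/BitTorrent segments) is where the real effort goes, but a shared parser like this would at least give all clients the same view of a metalink.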
The above applies unless you intend to actually create and distribute a metalink client, which could be launched, for instance, by a web browser when it downloads a given metalink.
Anyway, it's nice to see fresh new ideas :-)
Diverse environments cannot support a one-does-all system
I think one has to accept that the term NMS may mean different things depending on where you are in the chain of EMs (element managers).
For instance, the top-layer NMS handling alarms, which in a large network processes well over 100k alarms per day on average, is complicated enough as it is.
Once you get this big, the configuration management system or the reporting system becomes a system in itself, even if labeled as the same product, simply because of the number of different kinds of equipment you have.
The key is interoperability between systems, and this is what should be achieved. Trying to make a system that can do everything will give you nothing but headaches once you need to extend it to support a new kind of element.
What I am saying is that, in my view, the OSS systems of today are fine. If anybody wants to take this further, it should be done by, for instance, making a reporting system that can easily be configured to get data from different sources like RRD/SQL databases and then present the information to the user in a structured, generic way.
The same applies to the alarm-handling system: focus should be on handling alarms in a generic way. Don't try to interface with every kind of equipment and verify that it works; there are plenty of systems already doing this excellently. Just receive the alarms that somebody sends, interpret the language and the message, and present them in a generic fashion to the operator.
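A sketch of what such "interpret and normalize" handling might look like — the vendor message formats and field names below are invented purely for illustration:

```python
import re
from dataclasses import dataclass

@dataclass
class Alarm:
    """Vendor-neutral alarm record, as presented to the operator."""
    source: str
    severity: str
    message: str

# Hypothetical per-dialect parsing rules: one regex per message format
# the equipment happens to send. New equipment means a new rule here,
# not a new alarm-handling system.
RULES = [
    # e.g. "ALARM major node7: link down"
    re.compile(r"ALARM (?P<severity>\w+) (?P<source>\S+): (?P<message>.+)"),
    # e.g. "node7 | CRITICAL | fan failure"
    re.compile(r"(?P<source>\S+) \| (?P<severity>\w+) \| (?P<message>.+)"),
]

def normalize(raw):
    """Try each rule until one matches; return a generic Alarm."""
    for rule in RULES:
        m = rule.match(raw)
        if m:
            return Alarm(m.group("source"),
                         m.group("severity").lower(),
                         m.group("message"))
    # Unknown dialect: keep the alarm visible rather than dropping it.
    return Alarm("unknown", "indeterminate", raw)
```

The point is that extending coverage is a matter of adding a parsing rule, while the operator-facing side stays the same.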