LAM/MPI is an implementation of the Message Passing Interface (MPI) parallel standard that is especially friendly to clusters. It includes a persistent runtime environment for parallel programs, support for all of MPI-1, and a good chunk of MPI-2, such as all of the dynamic functions, one-sided communication, C++ bindings, and MPI-IO.
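For reference, the canonical "hello world" that any MPI implementation (LAM/MPI included) will build and run looks roughly like the sketch below; nothing in it is LAM-specific, and the mpicc/mpirun command names are just the usual conventions.

    /* Minimal MPI program -- a sketch, not LAM-specific.
       Typically built and run with something like:
           mpicc hello.c -o hello
           mpirun -np 4 ./hello                                  */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                    /* start up MPI        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* who am I?           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);      /* how many are there? */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                            /* shut down MPI       */
        return 0;
    }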
Parallel Bladeenc is a true parallel version of the Bladeenc MP3 encoder; it distributes work across CPUs to speed up MP3 encoding. It uses the Message Passing Interface (MPI) for parallelization across SMPs and/or multiple machines. Hence, if you have a 4-way SMP, you can encode your MP3s about 4 times as fast as the regular Bladeenc; if you have two 4-way SMPs, you can encode about 8 times as fast.
The Password Storage and Retrieval (PSR) system is a supplement to OpenPBS that allows PBS jobs to run with AFS authentication. It does this by strongly encrypting a user's password so the PBS server can retrieve and decrypt it when the user's job is run. The decrypted password is used to obtain an AFS token, allowing the job to run with AFS authentication. A "shepherd" process is forked into the background to renew the AFS token periodically to ensure that the token never expires while the job is still running.
This is a stable version of the Interoperable Message Passing Interface (IMPI) Server. It fully conforms to the January 2000 IMPI Draft Standard. IMPI is used to join multiple MPI implementations into a single parallel job. This package is independent of any specific MPI or IMPI implementation, and is intended for end users as well as IMPI implementors.
Open MPI is a project that originated as the merging of technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) in order to build the best MPI library available. A completely new MPI-2 compliant implementation, Open MPI offers advantages for system and software vendors, application developers, and computer science researchers. It is easy to use, and runs natively on a wide variety of operating systems, network interconnects, and batch/scheduling systems.
PLPA originated as an attempt to solve the problem of multiple APIs for processor affinity within Linux, but has since grown into a Linux processor affinity toolkit. It provides a Linux distro/kernel/glibc-independent C API for setting and getting processor affinity, and in newer kernels on supported platforms, it also supports mapping (core, socket) tuples to Linux virtual processor IDs. The plpa-taskset command effectively provides command-line access to the C API, and can be used to get/set processor affinity for new or already-running processes. Affinity can be expressed either as a set of Linux virtual processor IDs or (core, socket) tuples.
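A rough sketch of what the PLPA C API looks like for binding the calling process to a single Linux virtual processor: the interface mirrors glibc's sched_setaffinity(), but the exact function and macro names below are recalled from plpa.h and should be verified against your installed version.

    /* Hedged sketch: bind the calling process to Linux virtual
       processor 0 using PLPA.  Names are assumed from plpa.h and
       should be checked against the version you have installed.   */
    #include <plpa.h>

    int bind_to_cpu0(void)
    {
        plpa_cpu_set_t mask;

        PLPA_CPU_ZERO(&mask);
        PLPA_CPU_SET(0, &mask);    /* Linux virtual processor ID 0 */

        /* pid 0 means "the calling process", as with sched_setaffinity() */
        return plpa_sched_setaffinity(0, sizeof(mask), &mask);
    }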
hwloc provides command line tools and a C API to obtain the hierarchical map of key computing elements, such as: NUMA memory nodes, shared caches, processor sockets, processor cores, and processor "threads". hwloc also gathers various attributes such as cache and memory information, and is portable across a variety of different operating systems and platforms. hwloc primarily aims at helping high-performance computing (HPC) applications, but is also applicable to any project seeking to exploit code and/or data locality on modern computing platforms.
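A small example of the hwloc C API in the same spirit as its command line tools: load the topology of the current machine and count the processor cores it discovered.

    /* Sketch: discover the machine topology with hwloc and report
       the number of processor cores found.                          */
    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;
        int ncores;

        hwloc_topology_init(&topology);    /* allocate a topology object */
        hwloc_topology_load(topology);     /* probe the current machine  */

        ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        printf("hwloc found %d processor cores\n", ncores);

        hwloc_topology_destroy(topology);
        return 0;
    }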
Agree and disagree
I both agree and disagree.
This is from the perspective of a software developer -- someone who has to *write* the installers.
Disagree: I don't think that most users should build/install software. This is one of the Big Problems with Windows and Mac on the desktop, right? Users would install software just about anywhere, making it a complete nightmare for a sysadmin (and for the users themselves!). And what about the additional disk resources? What if every user compiled their own version of package X? Why have a copy for every user? (Try convincing an enterprise-wide IT manager that the file server needs another 60GB RAID array because users need their own copies of software.)
I claim that it's the sysadmin's job to install software in a central location that everyone can use. Sure, users can install their own software for testing (under their $HOME or something), and perhaps some esoteric packages that only they will use, but most software packages that are worth installing are suitable for general consumption (at least for purposes of this discussion :-). Indeed, as a sysadmin myself, I will rarely try to fix a software package that a user has self-installed; I will likely install the package myself and have them use that one instead -- in my experience, that solves most problems.
Installing software is a complex task. There are frequently many different variables other than just compiling and deciding where to place the binary/man/config files. Hence, even a "just click here!" solution is not sufficient (e.g., what if there are configuration options that I need to specify before it compiles? And what if those options are *complicated*? Even if the user "just clicks here", they still have to answer those questions, which they may or may not know how to do properly). That's why there are sysadmins -- to do these kinds of things.
And I would expect that sysadmins know how to read the INSTALL file, run "./configure; make all install", or do whatever else it takes to configure and install the app (granted, some packages make it easier than others -- see my "Agree" points below).
There is the argument, however: "What about home computers? There is no sysadmin there." Yes, with my parents, I have to painfully walk them through Windows installers (do you think I would have my parents run Linux? It's not *that* ready for prime time yet...); even the "simplest" (in my computer-geek view) questions in the install wizards confuse my parents because they don't know or care what they mean -- as they shouldn't. They're users -- they want to *use* the computer. They don't need to know how it works. After all, a computer is a labor-saving device, right? ;-)
If someone is installing their own Linux (BSD, whatever) system, then they're signing up for some pain. The current state of things is that Linux/etc. is *not* click-click-click-and-it's-installed (like Windoze). I still stand by my claim -- if you install Linux/etc. at home, you're signing yourself up to be a sysadmin, and you are therefore responsible for the learning curve that comes with it.
Agree: the current generation of generally accepted configure/build/install systems sucks.
Before you flame me with cries of "Use autoconf/automake/libtool!", let me say that they are fully functional tools, but they still suck. I use them heavily in my software projects, but if you've ever tried using autoconf/automake for anything more than a trivial project, you know what a nightmare they can be.
First off, all three are way out of date. The only part that keeps getting updated is config.[sub|guess].
Autoconf: Debugging autoconf scripts, and writing *truly* portable autoconf tests, is just as hard as writing the package that you're trying to distribute. That is, you effectively have to write *two* packages: your cool software itself, and a whole separate set of autoconf tests. Yes, autoconf has a bunch of built-in tests, but they typically don't cover all the things that a large, complex software package needs. For example:
- does the system have a prototype for gethostbyname()?
- is sa_len in struct sockaddr?
- does the C++ compiler use template repositories, and if so, what is the name of the directory that it uses?
The list is endless (see the sketch below for what just one of these checks boils down to). This is why every decent-sized software package has a large, complex configure.in (possibly with a bunch of extra .m4 files). This sucks. I just want to release my software -- I don't want to have to debug configure scripts ad nauseam. Additionally (IMHO), most programmers don't know how to write good configure scripts. Look at many packages here on Freshmeat -- they work great on Linux, but not on any other flavor of Unix.
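To make that concrete: the sa_len check above typically boils down to asking the compiler to build a throwaway C fragment like the one below. This is just a sketch -- the real pain is the autoconf macro goo you wrap around it, and the resulting config.h symbol name varies from project to project.

    /* Throwaway test program of the kind configure compiles to answer
       "is sa_len in struct sockaddr?"  If it compiles, the member
       exists and configure #defines something like HAVE_SA_LEN.       */
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr sa;
        sa.sa_len = 0;
        return 0;
    }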
These are a few of the reasons why I think autoconf is a framework that somewhat helps, but it's not enough.
Automake: automake is nice -- it gives you a whole boatload of automatic targets, including my personal favorite: uninstall. It does a good job on most things like install, uninstall, etc. But I liken automake to most Microsoft products: it's very easy to get simple projects going, but to do anything more interesting (e.g., large, complex software packages), you have to dive deep into its [lack of] documentation, do oodles of empirical testing to figure out how it *really* works, and then work around its bugs. automake has some *serious* drawbacks. Here are a few:
- the "depend" target is gcc-specific
- if there's no PROGRAMS target in a given directory, you can't make a convenience library (e.g., make a library with the .o's from a list of subdirectories)
- if there are no source files in a given directory, the "tags" target will fail
- the delicate timestamp dependencies between all of automake's generated files are very easy to break, causing unexpected side effects (try importing a released 3rd-party automake-ized package into your CVS tree, for example)
libtool: again, on the surface, libtool is very nice. It allows you to make shared and static libraries with ease, particularly when paired with automake. But it also has some serious drawbacks:
- no clean support for making a single library from source files from multiple directories
- no support for C++ libraries (try with any modern C++ compiler other than g++)
- way out of date; making shared libraries on AIX 4.3.3 doesn't work (for example)
All this being said, autoconf/automake/libtool are currently the best tools out there. They suck, but they suck the least of the bunch, and they are generally widely accepted. Hence, we programmers have to use them. The fact of the matter is that when used properly, they give a fully functional (and clean) install and uninstall package. And we sysadmins like that -- trust me. Contrary to what someone said above, installing is not just a matter of "mkdir foo; cp ... foo", and uninstalling is not just a matter of "rm -rf foo". Installing and uninstalling properly is a complex task.
I'd *love* for there to be something better. The Software Carpentry project tried to make something better, but I think the end results -- while a few steps in the right direction -- won't be a comprehensive solution (IMHO).
So, sorry, I digressed a bit off-topic here, but my main point of disagreement still stands: normal users shouldn't install most software. The sysadmin should do that.