On Dec 7, 2009, at 9:37 AM, devzero2000 wrote:
> On Mon, Dec 7, 2009 at 2:55 PM, Jeff Johnson <email@example.com> wrote:
>> On Dec 7, 2009, at 7:47 AM, devzero2000 wrote:
>>> High-performance computing systems have been popular for some time,
>>> and High Availability systems share many of the same problems. The
>>> question is how a package management system such as rpm5 can address
>>> the problems of such environments. I have not found any reference to
>>> these issues in package management systems generally, rpm5 included.
>>> Overall, these systems start from a simple assumption: a single
>>> system and a single metadata db (dpkg does not have a real db,
>>> however). But this assumption is wrong on an HPC system: in general,
>>> applications are not installed from a true package but are installed
>>> manually on a distributed or network filesystem: NFS, GFS2, or
>>> Lustre, for example. The problem, from my point of view, is that
>>> applications are not installed using a package system like rpm5 but
>>> installed manually; one then thinks it is sufficient to create a
>>> virtual package carrying only "requires" to handle update or
>>> conflict issues, and it is difficult to prove the opposite: why
>>> should I install the same package separately on multiple nodes when
>>> the package is the same and is installed in the same place (on a
>>> distributed or network filesystem)? I am of the opinion that a
>>> distributed system requires a distributed rpm5 metadata database,
>>> and the fact that rpm5 includes a relational database system (or a
>>> sort of one, in the latest incarnation of Berkeley DB) as its model
>>> is certainly an advantage; this is what an advocate of the
>>> relational model such as Chris Date says about the issue, last time
>>> I checked. On the pragmatic side, specifically: assuming 100 nodes
>>> at the same patch level, it should be possible to extend
>>> /var/lib/rpm/Packages with a shared rpm5 Packages (by extending
>>> _dbpath, for example) that would act as a fragment of Packages (a
>>> union of Packages, if you like); and if that is unavailable, well,
>>> no problem. The preceding is only a personal opinion. Are there
>>> other opinions? Have I perhaps missed something?
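The shared-_dbpath idea above can be made concrete with a few lines of shell. This is only a sketch: rpm has no union-of-Packages semantics, --dbpath merely selects a single database, and the /nfs/site/rpm/db location (and the RPM_CMD demo hook) are assumptions of the example, not existing conventions.

```shell
# Sketch: query a second, shared rpmdb living on a network filesystem.
# rpm has no "union of Packages" today; --dbpath merely selects one db.
# SHARED_DB is a hypothetical location; RPM_CMD is a demo/test hook.
SHARED_DB=${SHARED_DB:-/nfs/site/rpm/db}

shared_query() {
    # Run any ordinary rpm query, but against the shared database
    # instead of the node-local /var/lib/rpm.
    ${RPM_CMD:-rpm} --dbpath "$SHARED_DB" "$@"
}

# Examples (commented out; they need a populated shared db):
#   shared_query -qa                             # packages in shared tree
#   shared_query -qf /nfs/site/apps/bin/solver   # owner of a shared file
```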
>> HPC is usually focussed on scaling, installing identical software
>> on many nodes efficiently.
>> Distributing system images with modest per-node customization tends to be
>> simpler than per-node package management. Package management is useful for
>> constructing the system images. But PM cannot compete with system images
>> for installation scaling to multiple nodes.
> First of all, thanks for your reply. But i disagree on this point : it
> would be like saying that cloning is more useful than using conga and
> puppet (or kickstart FWIW) and here I disagree.
Well, let's decompose the statements above into pieces to see where the
disagreement lies.

"Distributing system images ... tends to be simpler ..." versus "cloning":

From a purely implementation POV, a package manager will always have
more overhead than blasting content onto physical media. The overhead
introduced by kernels, file systems, libraries, and applications is
eliminated if physical images are distributed.
"... modest per-node customization ... PM cannot compete" versus "using conga/puppet/kickstart":

package != configuration management is likely the crux of the disagreement.
I don't believe RPM is very good at "configuration management", which
is better handled by kickstart or puppet or conga or Augeas. Most
attempts at CM in *.rpm are through scriptlets, with known deficiencies.
I would claim that scriptlets are the single largest cause of upgrade
failures today. Whether "single largest" or "one of the largest" is
hardly worth discussing.
So I suspect we differ in package vs. configuration management assumptions.
>> Doing upgrades of multiple nodes is typically done by creating a new
>> system image, and then undertaking a reinstallation of the new system
>> image. This isn't as efficient as upgrading a package on a per-node basis
>> because new system images will contain redundant already installed
>> software. It's very hard to beat a reboot of a new system image located
>> on a distributed file system for KISS efficiency.
>> Tracking what system image is installed back to a specific PM database
>> that describes the installed software within the system image could
>> be done with a wrapper on rpm to choose 1-of-N rpmdb's to perform
>> detailed queries re files in the system image. But a flat file manifest
>> of what packages were installed in a system image is likely sufficient
>> for most purposes as well.
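Such a flat-file manifest is cheap to produce and to compare. A sketch using only coreutils; the rpm query that would generate a real manifest appears only as a comment, and the package names are fabricated for illustration:

```shell
# Sketch: flat-file manifests for a system image and a node, compared
# with comm(1).  Real manifests would be produced at image-build time
# and on the node, e.g. (illustrative, requires rpm):
#   rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > image.manifest
# Here two small manifests are fabricated to show the comparison itself.
printf '%s\n' bash-4.0 kernel-2.6.31 openmpi-1.3  > image.manifest
printf '%s\n' bash-4.0 kernel-2.6.31 valgrind-3.5 > node.manifest

# Present on the node but not in the image (local drift):
comm -13 image.manifest node.manifest

# In the image but missing from the node:
comm -23 image.manifest node.manifest
```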
> But THIS makes the role of a package management system, call it RPM5
> or something else, useless or worse.
> Are you sure?
Not sure about anything. What I described is based on an assumption
that physical images produced by a package manager are what could
be distributed. What is "THIS" and why is it "useless"?
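A 1-of-N wrapper like the one described is a few lines of shell. Only --dbpath is standard rpm here; the /net/mgmt/rpmdbs/&lt;image&gt; layout and the RPM_CMD demo hook are assumptions of this sketch:

```shell
# Sketch of a 1-of-N rpmdb selector: given an image name, run an
# ordinary rpm query against that image's database.  The layout
# (/net/mgmt/rpmdbs/<image>) is hypothetical; RPM_CMD is a demo hook.
IMAGEDB_ROOT=${IMAGEDB_ROOT:-/net/mgmt/rpmdbs}

rpm_for_image() {
    image=$1; shift
    ${RPM_CMD:-rpm} --dbpath "$IMAGEDB_ROOT/$image" "$@"
}

# e.g. rpm_for_image compute-2009.12 -qf /usr/bin/mpirun
#      (which package in that image owns the file?)
```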
>> A distributed PM (or system image) database using some RPC transport is
>> fairly simple. Since installed software is slowly changing, and mostly
> That is an opinion. Security patches arrive DAILY.
>> readonly after system images are created, the RPC performance
>> is likely not critical. Berkeley DB supplied sunrpc until db-4.8.24. Other
>> RPC transports onto Berkeley DB are no harder than sunrpc.
>> The above probably (imho) describes a reasonable architecture that scales efficiently
>> for maintaining software on most of the nodes in a HPC "cluster".
>> There's still a need for fault tolerance on the management server(s)
>> where images are resident and where images are produced that need
>> more than readonly access to databases. The management servers would
>> likely benefit from a replicated database (which Berkeley DB can provide).
>> One can imagine an architecture using replicated databases across
>> all nodes, with full ACID transactional properties on not only the
>> database, but also with packages and files. But the complexity
>> cost, and the scaling to many nodes, likely has combinatorial
>> failures. There are other efficiencies, like multicast transport,
>> and a reliable message bus (like dbus) that would likely be needed
>> as well.
> As I replied, your answer seems to reiterate that a package management
> system is not useful in an HPC ENVIRONMENT. But I do not agree. This
> is because a package management system involves, or is a necessary
> substrate for, software distribution and patch management. But your
> last reply is interesting, although it deserves further reflection.
There's likely a further disagreement here in package vs patch management.
The one attempt I'm aware of to integrate patch management into RPM
(from SuSE) has been largely deprecated.
I can go into details re why I believe the SuSE patch management did not
succeed (there's nothing wrong with the patch to rpm itself), but basically:
packages as containers for immutable files is where package management "works".
The corollary is that mutable files, whether managed through configuration
or patch management or simply not contained in packages at all, do not
work very well with RPM.
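To illustrate the immutable/mutable distinction: rpm verification compares on-disk file digests against the digests recorded in the rpmdb at build time, so a config file that is legitimately edited after install is flagged as modified from then on. A stand-in using only coreutils (rpm -V reports the analogous mismatch with a '5' in its output column):

```shell
# Stand-in for rpm verification using coreutils: record a digest at
# "install" time, edit the file, and watch verification fail.  rpm -V
# performs the same comparison against digests stored in the rpmdb.
echo 'key=original' > app.conf           # file as shipped in the package
sha256sum app.conf > recorded.digests    # what the rpmdb would record

echo 'key=site-local' > app.conf         # a legitimate post-install edit

# Verification can only report the mismatch; it cannot distinguish an
# intentional configuration edit from corruption.
if ! sha256sum --status -c recorded.digests; then
    echo 'app.conf: modified since install'
fi
```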
>> hth random opinions from 5 minutes of thought about
> HPC, HA, shared storage and RPM probably require further reflection.
> IMHO, the fact that they have not been mentioned in the past is
> probably because many applications (user applications, not system
> ones) are installed manually, and their maintainers have not
> considered the benefits of using a package management system for
> their applications.
Again, please take my comments as a result of 5 minutes of thought.
There are other architectures, and additional implementations, that
would be needed for RPM to be successful in managing HPC software.
For starters, there's little reason (aside from silly 32-bit constraints
imposed on files and payloads by existing implementations) why system
image or network appliance or DVD or ... management could not be done in
*.rpm. So far there's been little interest in attempting *.rpm
extensions in those directions, largely because of the known
73 de Jeff
Received on Mon Dec 7 16:19:55 2009