On Jun 29, 2011, at 9:59 AM, Tim Mooney wrote:
> In regard to: Re: RPM DB, Berkley DB and journal files, Jeff Johnson said...:
>> Meanwhile: Do you agree that increasing mmap size is the most important
>> performance related tunable?
> First, although I have indeed learned a lot about BDB from OpenLDAP, that
> still doesn't mean I know much. ;-) I went from knowing nothing about
> it to knowing just a little, and most of that specific to how OpenLDAP
> uses BDB. You're still the expert, not me.
Well, I'm not the expert by choice:
She LARTs me for every picayune flaw in RPM+BDB …
and the experience has developed a fair amount of scar tissue
over the years …
> That said, if you're going to use mmap at all, then I trust you're correct
> about mmap size being the most important tunable.
> However, there has been considerable discussion on the OpenLDAP mailing
> list about how on certain platforms, using mmap for the BDB env is much,
> much slower than using System V IPC shared memory. For example, on
> Solaris using shared memory is an order of magnitude faster than using
> mmap for the env:
> Howard Chu has also stated that IPC shared memory is beginning to be
> a performance win (at least for OpenLDAP) on recent Linux kernels too,
> though I don't believe it's the order-of-magnitude difference seen on Solaris.
> I can find more references to the issue with some more digging, but
> basically, for at least how OpenLDAP uses BDB, System V IPC can be
> a big win.
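For the record, Berkeley DB can be pointed at System V shared memory without any code change, through a DB_CONFIG file in the environment directory. A minimal sketch, not something I've deployed -- the key value is arbitrary, and the set_open_flags directive assumes Berkeley DB 4.7 or later:

```ini
# DB_CONFIG in the database environment directory (illustrative sketch).
# Back the env regions with System V shared memory instead of mmap'd files.
set_open_flags DB_SYSTEM_MEM
# Base segment key; Berkeley DB allocates consecutive keys starting here.
set_shm_key 665
```

With that in place the __db.NNN backing files disappear, at the cost of segments that persist until someone runs ipcrm(1).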
Thank you for the current pointers. I cruise the OpenLDAP
lists every 6 months or so looking for "real world" experience
with Berkeley DB. I babysit (as in run) 5-10 SKS key servers
for a similar purpose, seeing what issues there are in using
Berkeley DB persistently at scale …
BTW, I seem to recall an earlier post, also from Howard Chu,
about one of the tunables that led to ~11K transactions/second.
I can dig out the reference from a few years back if interested …
… all I recall atm is that I had to read and study the posting
for several days to do the same in "RPM ACID", but I have forgotten
the details even if not the (now ~2-year-old) experience.
> That doesn't necessarily mean it's a good idea for RPM in general,
> or RPM on an embedded platform in particular. It would get rid of
> most of the __db* files that the original bug report mentions, though. ;-)
> Possibly at the expense of making RPM not work at all on that platform.
Yes: the use case for Berkeley DB in OpenLDAP and RPM is rather different.
OpenLDAP is transaction oriented, with high throughput as the primary metric.
RPM isn't a daemon, and MUST drop in on bare iron and empty chroots
and Just Work to the greatest extent possible.
Those primary goals dictate the tuning, and -- until "RPM ACID" -- Berkeley DB
performance simply wasn't an issue; it is still mostly irrelevant. What
RPM users expect is a cushy ride with no human intervention whatsoever,
and so KISS is more appropriate than a better-designed schema or anything else.
>> I've diddled most of the tunables at some
>> point or another using rpm --stats as a measurement, and mmap'd I/O
>> is/was the most important tunable (disabling sync cannot be done in "production").
> Makes sense. Using RAM for the db env is always going to be faster than
> using disk.
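For RPM's case, the mmap tunable lives in the same DB_CONFIG mechanism. A sketch -- the sizes here are illustrative, not recommendations:

```ini
# DB_CONFIG (illustrative values only).
# Map read-only databases up to 16 MB directly into memory
# rather than paging them through the buffer cache.
set_mp_mmapsize 16777216
# Buffer cache: 0 GB + 8 MB, in 1 contiguous region.
set_cachesize 0 8388608 1
```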
I haven't really looked at the SysV IPC option with Berkeley DB, mostly
because I know that most RPM users simply cannot cope with
man 1 ipcs
from the command line, and SysV IPC has always been a bit painful to
manage when something breaks.
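The manual cleanup after a crash is exactly what worries me; a sketch of what a user would face, assuming a Linux-ish ipcs(1):

```shell
# List current System V shared memory segments -- what a DB_SYSTEM_MEM
# Berkeley DB env would leave behind after a crash.
ipcs -m

# Removing a stale segment requires finding its id by key, e.g.:
#   ipcrm -m <shmid>
```

That's two commands and a table of hex keys more than most RPM users will ever tolerate.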
My immediate question with IPC would be:
What happens when there are multiple IPC instances instantiated in a chroot?
There's almost certainly some corner case isolation issues because
IPC likely leaks information from the outer to the inner chroot
quite predictably. (Disclaimer: I haven't bothered to look, just …)
But while the general principle of
Memory is faster than disk.
applies, one also needs to consider the locking issues from
multiple contending accesses to an rpmdb, both outside and inside
a chroot, or some well-meaning claim of higher performance WILL
lead to "corruption" quite predictably. E.g. there are no provisions
I'm aware of (and arguably shouldn't be) for sharing a DBENV between
ELF32/ELF64 multilib environments. RPM users "expect" interoperability
to Just Work everywhere and always, in spite of the rather daunting
engineering difficulties of sharing a DBENV between architectures.
73 de Jeff
Received on Wed Jun 29 17:45:32 2011