wdb:developers:optimizing_wci [2008-12-10 10:50:34] michaeloa
wdb:developers:optimizing_wci [2022-05-31 09:29:32]
====== Optimizing WCI ======

===== Optimizing the Query Plan =====

===== Optimizing Data Reads =====

===== Distributed Database Cluster =====


===== WCI backend memcache =====

The performance data we are seeing indicate that the database is not using its assigned memory effectively: we see no significant performance difference between, for instance, a machine with 8GB of RAM and one with 16GB. The idea here is, basically, to force the database installation to reuse RAM. A suggested implementation is given below:

  * Set up memcached (or an alternative main-memory system, but memcached is probably best).
  * Hash Large Object segments into memcached. Since our LOs are immutable (at least in theory - in practice we still need to enforce this for WDB), we can simply implement the following for wci.read:
    * If the OID is in memcached, retrieve the object from memcached and extract the values.
    * If the OID is not in memcached, retrieve the object from the database, extract the data, and cache the object under its OID.

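The lookup described above can be sketched as follows. This is only an illustration: the ''cache'' dict stands in for a real memcached instance, and ''fetch_large_object'' is a hypothetical placeholder for the database read - neither is part of the WDB or WCI API.

```python
# Sketch of the proposed wci.read caching path.
# Assumptions: `cache` stands in for memcached (keyed by OID), and
# fetch_large_object() is a hypothetical placeholder for the database read.

cache = {}  # stand-in for memcached; a real deployment would use a memcached client


def fetch_large_object(oid):
    """Hypothetical database fetch of an immutable Large Object."""
    return b"gridded-field-bytes-for-%d" % oid


def read(oid):
    """wci.read-style lookup: try the cache first, fall back to the database."""
    blob = cache.get(oid)
    if blob is None:                # cache miss: read from the database...
        blob = fetch_large_object(oid)
        cache[oid] = blob           # ...and cache the object under its OID
    return blob
```

Because the Large Objects are immutable, the cached copy never needs invalidation; repeated reads of the same OID are served from memory.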
Postgres support for memcached exists: [[http://

Pros: Will likely reduce dependency on disk speed even further, since a large proportion of queries hit the same OIDs repeatedly.
Cons: Currently (Dec 2008), WDB appears to be CPU-bound for large workloads; adding memcached does not solve that problem.