====== Optimizing WCI ======

===== Optimizing of Query Plan =====

  * Run ANALYZE manually after loading operations (a minimal example is sketched below).
    * Normally ANALYZE is run as part of VACUUM ANALYZE, but that may take a while, since the VACUUM step itself is slow. Running ANALYZE on its own immediately after loading cuts that time down.
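
The sketch below only illustrates the point; the table name is hypothetical and not part of the WDB schema.

<code sql>
-- Refresh planner statistics immediately after a bulk load.
-- "wci_load_target" is a hypothetical table name used only for illustration.
ANALYZE wci_load_target;

-- The usual maintenance command also vacuums, which is the slow part:
-- VACUUM ANALYZE wci_load_target;
</code>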

===== Optimizing of Data Reads =====

===== Distributed Database Cluster =====

===== WCI backend memcache =====

The performance data we are seeing indicate that the database is not making effective use of the memory it has been assigned. For instance, we see no significant difference between the performance of a machine with 8GB of RAM and one with 16GB. The idea here is, basically, to force reuse of RAM for the database installation. A suggested implementation is given below:

  * Set up memcached (or an alternative main-memory system, but memcached is probably the best fit).
  * Hash Large Object segments into memcached. Since our LOs are immutable (at least in theory - we need to do something about this in practice for WDB), wci.read can simply implement the following lookup (sketched in the code after this list):
    * If the OID is in memcached, retrieve the object from memcached and extract the values.
    * If the OID is not in memcached, retrieve the object from the database, extract the data, and cache it under the OID.
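
A minimal sketch of that lookup, assuming the pgmemcache functions memcache_get() and memcache_set() (whose exact signatures may differ between versions) and a hypothetical helper wci_internal.read_grid_blob() standing in for the existing uncached Large Object read:

<code sql>
-- Sketch only. memcache_get()/memcache_set() come from pgmemcache;
-- wci_internal.read_grid_blob() and the key format are illustrative assumptions.
CREATE OR REPLACE FUNCTION wci_internal.cached_read_grid_blob(grid_oid oid)
RETURNS bytea AS
$$
DECLARE
    cache_key text := 'wdb:lo:' || grid_oid::text;
    cached    text;
    blob      bytea;
BEGIN
    -- OID in memcached: return the cached copy.
    cached := memcache_get(cache_key);
    IF cached IS NOT NULL THEN
        RETURN decode(cached, 'base64');
    END IF;

    -- OID not in memcached: read from the database, then cache it under the OID.
    blob := wci_internal.read_grid_blob(grid_oid);
    PERFORM memcache_set(cache_key, encode(blob, 'base64'));
    RETURN blob;
END;
$$ LANGUAGE plpgsql;
</code>

wci.read would then go through this function instead of reading the Large Object directly; since the LOs are immutable, no invalidation step is needed.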

Postgres support for memcached exists: [[http://pgfoundry.org/projects/pgmemcache/|pgmemcache]]. [[http://www.danga.com/memcached/|memcached]] itself is packaged in Debian.
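
As a rough sketch of the setup step (the function name below comes from pgmemcache and may differ between versions; the address is simply memcached's default):

<code sql>
-- Register a local memcached instance with pgmemcache.
SELECT memcache_server_add('localhost:11211');
</code>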

Pros: Will likely reduce the dependency on disk speed even further, since a large proportion of the queries hit the same OIDs repeatedly.

Cons: Currently (Dec 2008), WDB appears to be CPU-bound for large workloads; adding memcached does not solve that problem.
  