author    Richard Purdie <richard.purdie@linuxfoundation.org>  2014-07-25 14:54:23 +0100
committer Richard Purdie <richard.purdie@linuxfoundation.org>  2014-07-26 08:50:14 +0100
commit    89d178841208557b030103cf0ae813a42550487c (patch)
tree      8fee8247d56c36cf28a5b36671725e01325eb58f /meta/recipes-core/util-linux
parent    a05435fc59d32f2fcf4ea4185cb0655eeb343211 (diff)
download  poky-89d178841208557b030103cf0ae813a42550487c.tar.gz
bitbake: codeparser cache improvements
It turns out the codeparser cache is the bottleneck I've been observing when running bitbake commands, particularly as it grows. There are some things we can do about this:

* We were processing the cache with "intern()" at save time. It's actually much more memory efficient to do this at creation time.
* Use hashable objects such as frozenset rather than set so that we can compare objects.
* De-duplicate the cache objects, linking duplicates to the same object, which saves memory and disk usage and improves speed.
* Use custom setstate/getstate to avoid the overhead of object attribute names in the cache file.

To make this work, a global cache was needed for the list of set objects, as this was the only way I could find to get the data in at setstate object creation time :(.

Parsing shows a modest improvement with these changes, cache load time is significantly better, cache save time is reduced since there is now no need to reprocess the data, and the cache is much smaller.

We can drop the compress_keys() code and internSet code from the shared cache core since they are no longer used, having been replaced by codeparser-specific pieces.

(Bitbake rev: 4aaf56bfbad4aa626be8a2f7a5f70834c3311dd3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
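The commit message above describes a combination of standard Python serialization optimizations: interning strings at creation time, using frozenset so values are hashable and comparable, collapsing duplicate set objects through a global cache, and custom getstate/setstate so attribute names are not written into the pickle for every entry. The following is a minimal, hypothetical sketch of that pattern, not the actual bitbake codeparser code; the names CodeParserEntry, deduplicate, and _setcache are illustrative assumptions.

    import sys

    # Global de-duplication cache: identical frozensets map to a single shared
    # object, so entries recreated at unpickle time re-link to the same object.
    _setcache = {}

    def deduplicate(items):
        """Return a shared frozenset of interned strings for the given items."""
        fs = frozenset(sys.intern(i) for i in items)   # intern at creation time
        return _setcache.setdefault(fs, fs)            # reuse an existing object if present

    class CodeParserEntry:
        """Illustrative cache entry holding sets of references and executed functions."""

        def __init__(self, references, execs):
            self.references = deduplicate(references)
            self.execs = deduplicate(execs)

        # Custom pickle hooks: store a plain tuple instead of __dict__, so the
        # attribute names are not repeated for every entry in the cache file.
        def __getstate__(self):
            return (self.references, self.execs)

        def __setstate__(self, state):
            refs, execs = state
            # The module-level _setcache is what allows duplicates to be
            # re-linked here, since __setstate__ runs per object and has no
            # other view of previously loaded data.
            self.references = deduplicate(refs)
            self.execs = deduplicate(execs)

This sketch only illustrates why a global cache is needed: __setstate__ is called independently for each unpickled object, so a shared module-level structure is the natural place to collapse duplicates during load.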
Diffstat (limited to 'meta/recipes-core/util-linux')
0 files changed, 0 insertions, 0 deletions