feat: integrate persistent cache with blockchain providers #1697
base: main
Conversation
  count: 198,
  // eslint-disable-next-line no-magic-numbers
  size: 0.3 * sizeOf1mb
}
Values have been calculated by first populating the cache data while running Lace, and then running this script with 7 (MB) as a parameter representing the total space expected to be occupied by the 3 caches. I think we should run it for a few different wallets to see how their states project onto those cache sizes.
script
const getSizesPerCollection = async (totalMaxSize) => {
  const chainHistoryProviderMetadataKey = 'chain-history-provider-cache-metadata';
  const inputResolverMetadataKey = 'input-resolver-cache-metadata';
  const utxoProviderMetadataKey = 'utxo-provider-cache-metadata';

  // Measure how much space a cache collection currently occupies and the average size of a single item.
  const getSize = async (metadataKey) => {
    const { [metadataKey]: metadata } = await chrome.storage.local.get(metadataKey);
    const bytes = await chrome.storage.local.getBytesInUse([...Object.keys(metadata), metadataKey]);
    const itemsCount = Object.keys(metadata).length;
    const sizeInMb = bytes / 1024 / 1024;
    const sizeOfSingleItem = sizeInMb / itemsCount;
    return {
      sizeInMb,
      sizeOfSingleItem
    };
  };

  const chainHistorySize = await getSize(chainHistoryProviderMetadataKey);
  const inputResolverSize = await getSize(inputResolverMetadataKey);
  const utxoSize = await getSize(utxoProviderMetadataKey);

  // Split the total budget between the collections proportionally to their current sizes.
  const totalSize = chainHistorySize.sizeInMb + inputResolverSize.sizeInMb + utxoSize.sizeInMb;
  const targetSizeOfChainHistory = totalMaxSize * (chainHistorySize.sizeInMb / totalSize);
  const targetSizeOfInputResolver = totalMaxSize * (inputResolverSize.sizeInMb / totalSize);
  const targetSizeOfUtxo = totalMaxSize * (utxoSize.sizeInMb / totalSize);

  // Derive the max item count per collection from its target size and the average item size.
  const targetCountOfChainHistory = targetSizeOfChainHistory / chainHistorySize.sizeOfSingleItem;
  const targetCountOfInputResolver = targetSizeOfInputResolver / inputResolverSize.sizeOfSingleItem;
  const targetCountOfUtxo = targetSizeOfUtxo / utxoSize.sizeOfSingleItem;

  return {
    chainHistory: {
      count: targetCountOfChainHistory,
      size: targetSizeOfChainHistory
    },
    inputResolver: {
      count: targetCountOfInputResolver,
      size: targetSizeOfInputResolver
    },
    utxo: {
      count: targetCountOfUtxo,
      size: targetSizeOfUtxo
    }
  };
};
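For reference, a usage example matching the comment above (7 MB as the total budget across the 3 caches). This is a sketch: it assumes it is run in the extension's DevTools console (where top-level await and chrome.storage are available) after the caches have been populated by using Lace:

// Example run with a 7 MB total budget, as described above.
const targets = await getSizesPerCollection(7);
console.log(targets);
// => { chainHistory: { count, size }, inputResolver: { count, size }, utxo: { count, size } }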
Allure Report
processReports: ✅ test report for edb88064
Great Work! Left one comment/suggestion*
  size: 0.06 * sizeOf1mb
},
[CacheName.utxoProvider]: {
  count: 198,
Would it make sense to check for the items with min/max size in each collection? Right now you take the average.
Would be great to check on #1656 :)
Yes, right now I calculate the average size of an item (collection size / item count) and then check how many such items would fit in the target collection size. What is your suggestion?
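For illustration, a hypothetical extension of the script above that reports min/max item sizes alongside the average; it assumes each cached item can be queried individually via chrome.storage.local.getBytesInUse (this helper is not part of the PR):

// Hypothetical: per-collection min/max/average item size in bytes.
const getItemSizeStats = async (metadataKey) => {
  const { [metadataKey]: metadata } = await chrome.storage.local.get(metadataKey);
  const keys = Object.keys(metadata);
  // Query each cached item individually to get its exact byte size.
  const sizes = await Promise.all(keys.map((key) => chrome.storage.local.getBytesInUse(key)));
  const total = sizes.reduce((sum, bytes) => sum + bytes, 0);
  return {
    min: Math.min(...sizes),
    max: Math.max(...sizes),
    average: total / keys.length
  };
};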
For context, the count represents the max number of items for a collection, and it is a fallback: primarily, the cache logic will attempt to read the size in bytes, but there is a small chance this API won't be available.
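A minimal sketch of that fallback, assuming a hypothetical helper shape (the actual SDK method is not shown in this thread): the byte-based limit is preferred, and the item count is only used when the bytes-in-use API is unavailable.

// Hypothetical: prefer the byte-based limit; fall back to the max item count.
const isOverLimit = async (metadata, { size, count }) => {
  const keys = Object.keys(metadata);
  if (typeof chrome.storage.local.getBytesInUse === 'function') {
    const bytesInUse = await chrome.storage.local.getBytesInUse(keys);
    return bytesInUse > size;
  }
  // Bytes-in-use API unavailable: rely on the item count instead.
  return keys.length > count;
};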
@szymonmaslowski why do we need `count` in addition to `size`? Edit: ah, got it - it's a fallback in the SDK method 👍
I guess it would be better if the SDK did a rough calculation based on the stored data itself instead of taking the optional count param 🤔 something like this should do the trick well enough:
new TextEncoder().encode(
  Object.entries(await browser.storage.sync.get())
    .map(([key, value]) => key + JSON.stringify(value))
    .join('')
).length
Just thinking if there could be a case where the size of an item is actually way bigger than we initially calculated, so we end up with too few items fitting in the cache and would constantly just be replacing items there. Feel free to ignore this comment though :)
The context here is that the default storage space is 10MB. It can be increased to unlimited, but that requires requesting an additional permission, which will most likely increase the review time of a newly submitted extension version. The idea was to go with 10MB for this release and increase it in the next one.
@DominikGuzei, that's a nice suggestion! I would definitely go for it, but there is a requirement that we don't load the whole cache into memory. With unlimited storage we will probably aim for a larger size per cache collection.
@vetalcore Considering we will have the unlimited storage size, we will also increase the limits per collection, so the problem you described will be even less likely to happen.
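For reference, the unlimited quota mentioned above is granted by declaring the unlimitedStorage permission in the extension manifest. A generic manifest.json excerpt (not a diff from this PR):

{
  "permissions": ["storage", "unlimitedStorage"]
}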
Code LGTM 🚢 The only potential improvement I can see is that approximation of `count` being used for the cache (due to SDK limitations), which might be better solved by the SDK doing some rough "bytes in use" calculation itself. But that's just a side note :)
Compare: 41ef553 to 1ed1333, then 1ed1333 to edb8806
Checklist
Proposed solution
Enabling persistent cache for:
- chain history provider
- input resolver
- UTXO provider
Testing
Initial loading of Lace is expected to make the following Blockfrost requests:
txs/${hash}/cbor
txs/${txId}
txs/${txIn}/utxos
When a particular cache reaches the max size defined here, the 10% of items with the oldest access time should be removed.
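A minimal sketch of that eviction rule, assuming each metadata entry records a lastAccess timestamp (hypothetical names; the actual logic lives in the SDK):

// Hypothetical eviction: drop the 10% of cached items with the oldest access time.
const evictOldest = async (metadataKey) => {
  const { [metadataKey]: metadata } = await chrome.storage.local.get(metadataKey);
  const entries = Object.entries(metadata);
  // Sort so that the least recently accessed items come first.
  entries.sort(([, a], [, b]) => a.lastAccess - b.lastAccess);
  const evictCount = Math.ceil(entries.length * 0.1);
  const keysToRemove = entries.slice(0, evictCount).map(([key]) => key);
  // Remove the evicted items and their metadata entries.
  await chrome.storage.local.remove(keysToRemove);
  for (const key of keysToRemove) delete metadata[key];
  await chrome.storage.local.set({ [metadataKey]: metadata });
};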