
feat: integrate persistent cache with blockchain providers #1697

Open · wants to merge 1 commit into base: main

Conversation

szymonmaslowski (Contributor)

Checklist

  • JIRA - <link>
  • Proper tests implemented
  • Screenshots added

Proposed solution

Enabling persistent cache for the following providers (a rough sketch of the idea follows the list):

  • chain history provider
  • input resolver
  • utxo provider
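
A rough sketch of the general idea (all names below are hypothetical, not the actual SDK API): each provider method is wrapped so that results are served from chrome.storage.local when present and persisted on a cache miss.

// Hypothetical sketch, not the actual SDK API: wrap a provider call so
// results are read from chrome.storage.local first and stored on miss.
const withPersistentCache = (cacheName, fn) => async (arg) => {
  const key = `${cacheName}:${JSON.stringify(arg)}`;
  const { [key]: cached } = await chrome.storage.local.get(key);
  if (cached !== undefined) return cached; // cache hit: no Blockfrost request
  const value = await fn(arg); // cache miss: fetch and persist
  await chrome.storage.local.set({ [key]: value });
  return value;
};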

Testing

Initial loading of Lace is expected to make the following Blockfrost requests:

  • txs/${hash}/cbor
  • txs/${txId}
  • txs/${txIn}/utxos

When a particular cache reaches the max size defined here, the 10% of items with the oldest access time should be removed.
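
A minimal sketch of that eviction policy (the metadata shape and names are assumptions, not the SDK's actual code): the per-collection metadata maps each cache key to its last access time, and once the collection exceeds its max size the oldest 10% of items are dropped.

// Assumed shape: `metadata` maps cache keys to { lastAccess } entries.
const evictOldest = async (metadataKey, maxSizeInBytes) => {
  const { [metadataKey]: metadata } = await chrome.storage.local.get(metadataKey);
  const keys = Object.keys(metadata);
  const bytes = await chrome.storage.local.getBytesInUse([...keys, metadataKey]);
  if (bytes <= maxSizeInBytes) return;
  const toEvict = keys
    .sort((a, b) => metadata[a].lastAccess - metadata[b].lastAccess)
    .slice(0, Math.ceil(keys.length * 0.1)); // 10% with oldest access time
  await chrome.storage.local.remove(toEvict);
  for (const key of toEvict) delete metadata[key];
  await chrome.storage.local.set({ [metadataKey]: metadata });
};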

@szymonmaslowski requested a review from a team as a code owner on February 6, 2025 16:33
count: 198,
// eslint-disable-next-line no-magic-numbers
size: 0.3 * sizeOf1mb
}
@szymonmaslowski (Contributor, Author) commented Feb 6, 2025:

Values have been calculated by first populating the cache (by running Lace), then running the script below with 7 (MB) passed as the parameter representing the total space the 3 caches are expected to occupy. I think we should run it for a few different wallets to see how their states map to these cache sizes.

Script:
// Computes a target size and item count per cache collection by splitting
// `totalMaxSize` (in MB) proportionally to each collection's current share.
const getSizesPerCollection = async (totalMaxSize) => {
  const chainHistoryProviderMetadataKey = 'chain-history-provider-cache-metadata';
  const inputResolverMetadataKey = 'input-resolver-cache-metadata';
  const utxoProviderMetadataKey = 'utxo-provider-cache-metadata';

  // Measure a collection's current footprint and average item size.
  const getSize = async (metadataKey) => {
    const { [metadataKey]: metadata } = await chrome.storage.local.get(metadataKey);
    const bytes = await chrome.storage.local.getBytesInUse([...Object.keys(metadata), metadataKey]);
    const itemsCount = Object.keys(metadata).length;
    const sizeInMb = bytes / 1024 / 1024;
    const sizeOfSingleItem = sizeInMb / itemsCount;

    return {
      sizeInMb,
      sizeOfSingleItem
    };
  };

  const chainHistorySize = await getSize(chainHistoryProviderMetadataKey);
  const inputResolverSize = await getSize(inputResolverMetadataKey);
  const utxoSize = await getSize(utxoProviderMetadataKey);

  const totalSize = chainHistorySize.sizeInMb + inputResolverSize.sizeInMb + utxoSize.sizeInMb;

  const targetSizeOfChainHistory = totalMaxSize * (chainHistorySize.sizeInMb / totalSize);
  const targetSizeOfInputResolver = totalMaxSize * (inputResolverSize.sizeInMb / totalSize);
  const targetSizeOfUtxo = totalMaxSize * (utxoSize.sizeInMb / totalSize);

  const targetCountOfChainHistory = targetSizeOfChainHistory / chainHistorySize.sizeOfSingleItem;
  const targetCountOfInputResolver = targetSizeOfInputResolver / inputResolverSize.sizeOfSingleItem;
  const targetCountOfUtxo = targetSizeOfUtxo / utxoSize.sizeOfSingleItem;

  return {
    chainHistory: {
      count: targetCountOfChainHistory,
      size: targetSizeOfChainHistory
    },
    inputResolver: {
      count: targetCountOfInputResolver,
      size: targetSizeOfInputResolver
    },
    utxo: {
      count: targetCountOfUtxo,
      size: targetSizeOfUtxo
    }
  };
};
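
For reference, a hypothetical invocation from the extension's console, passing the 7 MB total mentioned above:

// Run after populating the caches by using Lace for a while:
getSizesPerCollection(7).then(console.log);
// => { chainHistory: { count, size }, inputResolver: { … }, utxo: { … } }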

@pczeglik-iohk (Contributor) commented Feb 6, 2025:

Allure Report

allure-report-publisher generated test report!

processReports: ✅ test report for edb88064

       passed  failed  skipped  flaky  total
Total  33      0       4        0      37

@vetalcore (Contributor):

Great work! Left one comment/suggestion.

size: 0.06 * sizeOf1mb
},
[CacheName.utxoProvider]: {
count: 198,
@vetalcore (Contributor):

Would it make sense to check for the items with min/max size in each collection? Right now you use the average.
Would be great to check on #1656 :)

@szymonmaslowski (Contributor, Author):

Yes, right now I calculate the average size of an item (collection size / items count) and then check how many such items would fit in the target collection size. What is your suggestion?

@szymonmaslowski (Contributor, Author):

For context: the count represents the max count of items for a collection, and it is a fallback. Primarily, the cache logic will attempt to read the size in bytes, but there is a small chance this API won't be available.
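
A hedged sketch of that fallback (hypothetical names, not the SDK's actual code): prefer chrome.storage.local.getBytesInUse when the API exists, otherwise compare the item count against the configured max.

// Hypothetical: byte-based limit when available, item count as fallback.
const isOverLimit = async (keys, { size, count }) => {
  if (chrome.storage.local.getBytesInUse) {
    const bytes = await chrome.storage.local.getBytesInUse(keys);
    return bytes > size;
  }
  return keys.length > count;
};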

@DominikGuzei (Member) commented Feb 7, 2025:

@szymonmaslowski why do we need count in addition to size? Edit: ah, got it - it's a fallback in the SDK method 👍

@DominikGuzei (Member):

I guess it would be better if the SDK did a rough calculation based on the stored data itself instead of taking the optional count param 🤔 Something like this should do the trick well enough:

// Rough estimate: UTF-8 byte length of all keys plus their serialized values.
new TextEncoder().encode(
  Object.entries(await browser.storage.sync.get())
    .map(([key, value]) => key + JSON.stringify(value))
    .join('')
).length

@vetalcore (Contributor):

Just thinking whether there could be a case where the size of an item is actually way bigger than we initially calculated, so we end up with too small an amount of items in the cache and would constantly be replacing items there. Feel free to ignore this comment though :)

@szymonmaslowski (Contributor, Author):

The context here is that the default storage space is 10 MB; it can be increased to unlimited, but that requires requesting an additional permission, which will most likely increase the review time of a newly submitted extension version. The idea was to go with 10 MB this release and increase the limit in the next release.

@DominikGuzei, that's a nice suggestion! I would definitely go for it, but there is a requirement that we don't load the whole cache into memory. With unlimited storage we will probably aim for more size per cache collection.

@vetalcore Once we have the unlimited storage size we will also increase the limits per collection, so the problem you described will be even less likely to happen.
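
For reference, Chrome extensions opt out of the default storage quota via the unlimitedStorage permission; a minimal manifest.json excerpt (not this PR's actual manifest):

{
  "permissions": ["storage", "unlimitedStorage"]
}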

@DominikGuzei self-requested a review on February 7, 2025 09:58
@DominikGuzei (Member) left a comment:

Code LGTM 🚢 The only potential improvement I can see is the approximation of count being used for the cache (due to SDK limitations), which might be better solved by the SDK doing some rough "bytes in use" calculation itself. But that's just a side note :)


@szymonmaslowski force-pushed the feat/LW-12228-improve-caching-mechanism branch 2 times, most recently from 41ef553 to 1ed1333 on February 7, 2025 16:24
@szymonmaslowski force-pushed the feat/LW-12228-improve-caching-mechanism branch from 1ed1333 to edb8806 on February 10, 2025 09:32