If you're in a hurry, feel free to jump straight to the demos.
- see SETUP for the installation/configuration guide
- see DEVELOPMENT for the development guide
- see DESIGN for the design goals
- see MODULES for module-specific setup
- see MODULE_DESIGN for some thoughts on structuring modules, and possibly extending HPI
- see exobrain/HPI for some of my raw thoughts and todos on the project
TLDR: I'm using the HPI (Human Programming Interface) package as a means of unifying, accessing and interacting with all of my personal data.
HPI is a Python package (named =my=), a collection of modules for:
- social networks: posts, comments, favorites
- reading: e-books and pdfs
- annotations: highlights and comments
- todos and notes
- health data: sleep, exercise, weight, heart rate, and other body metrics
- location
- photos & videos
- browser history
- instant messaging
The package hides the gory details of locating data, parsing, error handling and caching. You simply 'import' your data and get to work with familiar Python types and data structures.
- Here's a short example to give you an idea: "which subreddits do I find the most interesting?"
import my.reddit.all
from collections import Counter
return Counter(s.subreddit for s in my.reddit.all.saved()).most_common(4)
orgmode        62
emacs          60
selfhosted     51
QuantifiedSelf 46
I consider my digital trace an important part of my identity. (#extendedmind) Usually the data is siloed, and accessing it is inconvenient and borderline frustrating. This feels very wrong.
In contrast, once the data is available as Python objects, I can easily plug it into existing tools, libraries and frameworks. It makes building new tools considerably easier and opens up new ways of interacting with the data.
I tried different things over the years and I think I'm getting to the point where other people can also benefit from my code by 'just' plugging in their data, and that's why I'm sharing this.
Imagine if all your life was reflected digitally and available at your fingertips. This library is my attempt to achieve this vision.
Table of contents:
- Why?
- How does a Python package help?
- Why don't you just put everything in a massive database?
- Whatβs inside?
- How do you use it?
- Ad-hoc and interactive
- What were my music listening stats for 2018?
- What are the most interesting Slate Star Codex posts I've read?
- Accessing exercise data
- Book reading progress
- Messenger stats
- Which month in 2020 did I make the most git commits in?
- Querying Roam Research database
- How does it get input data?
- Q & A
- Why Python?
- Can anyone use it?
- How easy is it to use?
- What about privacy?
- But should I use it?
- Would it suit me?
- What it isn't
- HPI Repositories
- Related links
The main reason that led me to develop this is dissatisfaction with the current situation:
- Our personal data is siloed and trapped across cloud services and various devices
Even when it's possible to access it via an API, it's hardly useful unless you're an experienced programmer willing to invest time and effort into the infrastructure.
- We have insane amounts of data scattered across the cloud, yet we're left at the mercy of those who collect it to provide something useful based on it
Integrations of data across silo boundaries are almost non-existent. There is so much potential and it's all wasted.
- I'm not willing to wait until some vaporware project reinvents the whole computing model from scratch
As a programmer, I am in a position to do something right now, even though it's not necessarily perfect and consistent.
I've written a lot about it here, so allow me to simply quote:
- search and information access
- Why can't I search over all of my personal chat history with a friend, whether it's ICQ logs from 2005 or Whatsapp logs from 2019?
- Why can't I have incremental search over my tweets? Or browser bookmarks? Or over everything I've ever typed/read on the Internet?
- Why can't I search across my watched youtube videos, even though most of them have subtitles, hence allowing for full text search?
- Why can't I see the places my friends recommended me on Google maps (or any other maps app)?
- productivity
- Why can't my Google Home add shopping list items to Google Keep? Let alone other todo-list apps.
- Why can't I create a task in my todo list or calendar from a conversation on Facebook Messenger/Whatsapp/VK.com/Telegram?
- journaling and history
- Why do I have to lose all my browser history if I decide to switch browsers?
- Why can't I see all the places I traveled to on a single map, with photos alongside?
- Why can't I see what my heart rate (i.e. excitement) and speed were side by side with the video I recorded on GoPro while skiing?
- Why can't I easily transfer all my books and metadata if I decide to switch from Kindle to PocketBook or vice versa?
- consuming digital content
- Why can't I see stuff I highlighted on Instapaper as an overlay on top of the web page?
- Why can't I have a single 'read it later' list, unifying all things saved on Reddit/Hackernews/Pocket?
- Why can't I use my todo app instead of the 'Watch later' playlist on youtube?
- Why can't I 'follow' some user on Hackernews?
- Why can't I see if I've run across a Youtube video because my friend sent me a link months ago?
- Why can't I have uniform music listening stats based on my Last.fm/iTunes/Bandcamp/Spotify/Youtube?
- Why am I forced to use Spotify's music recommendation algorithm and don't have an option to try something else?
- Why can't I easily see what books/music/art were recommended by my friends or by some specific Twitter/Reddit/Hackernews users?
- Why doesn't my otherwise perfect Hackernews app for Android share saved posts/comments with the website?
- health and body maintenance
- Why can't I tell if I was more sedentary than usual during the past week and whether I need to compensate by doing a bit more exercise?
- Why can't I see the impact of aerobic exercise on my resting HR?
- Why can't I have a dashboard for all of my health: food, exercise and sleep, to see baselines and trends?
- Why can't I see the impact of temperature or CO2 concentration in the room on my sleep?
- Why can't I see how holidays (as in, not going to work) impact my stress levels?
- Why can't I take my Headspace app data and see how/if meditation impacts my sleep?
- Why can't I run a short snippet of code and check some random health advice from the Internet against my own health data?
- personal finance
- Why am I forced to manually copy transactions from different banking apps into a spreadsheet?
- Why can't I easily match my Amazon/Ebay orders with my bank transactions?
- Why can't I do anything when I'm offline or have a wonky connection?
- tools for thinking and learning
- Why, when something like a 'mind palace' is literally possible with VR technology, don't we see any in use?
- Why can't I easily convert select Instapaper highlights or new foreign words I encountered on my Kindle into Anki flashcards?
- mediocre interfaces
- Why do I have to suffer from poor management and design decisions in UI changes, even if the interface is not the main reason I'm using the product?
- Why can't I leave priorities and notes on my saved Reddit/Hackernews items?
- Why can't I leave private notes on Deliveroo restaurants/dishes, so I'd remember what to order/not to order next time?
- Why do people have to suffer from the Google Inbox shutdown?
- communication and collaboration
- Why can't I easily share my web or book highlights with a friend? Or just make highlights in select books public?
- Why can't I easily find out another person's expertise without interrogating them, just by looking at what they read instead?
- backups
- Why do I have to think about it and actively invest time and effort?
- I'm tired of having to use multiple different messengers and social networks
- I'm tired of shitty bloated interfaces
Why do we have to be at the mercy of their developers, designers and product managers? If we had our data at hand, we could fine-tune interfaces for our needs.
- I'm tired of the mediocre search experience
Text search is something computers do exceptionally well. Yet it's often not available offline, it's not incremental, everyone reinvents their own query language, and so on.
- I'm frustrated by the poor information exploring and processing experience
While for many people services like Reddit or Twitter are simply time killers (and I don't judge), some want to use them efficiently, as a source of information and research. The modern bookmarking experience makes that far from perfect.
You can dismiss this as a list of first-world problems, and you would be right, they are. But the major reason I want to solve these problems is to be better at learning and working with knowledge, so I could be better at solving the real problems.
When I started solving some of these problems for myself, I noticed a common pattern: the hardest bit is actually getting hold of your data in the first place. It's inherently error-prone and frustrating.
But once you have the data in a convenient representation, working with it is pleasant β you get to explore and build instead of fighting with yet another stupid REST API.
This package knows how to find data on your filesystem, deserialize it and normalize it to a convenient representation. You have the full power of the programming language to transform the data and do whatever comes to your mind.
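For instance, accessing Hypothes.is annotations is just an import away. Here is a minimal sketch using the same my.hypothesis attributes (pages, title, url, highlights) that appear in the demo later in this post:

from my.hypothesis import pages

for page in pages():
    # each page is a plain Python object with familiar types
    print(page.title, page.url, len(page.highlights))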
Glad you've asked! I wrote a whole post about it. In short: while databases are efficient and easy to read from, often they aren't flexible enough to fit your data. You're probably going to end up writing code anyway.
While working with your data, you'll inevitably notice common patterns and code repetition, which you'll probably want to extract somewhere. That's where a Python package comes in.
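As a hypothetical illustration (the helper below is mine, not part of the package), the kind of repetition that naturally gets extracted is a tiny utility like this, which works the same way for scrobbles, messages or commits, all of which expose a datetime attribute in the examples later in this post:

from collections import Counter

def by_month(events):
    # events: any iterable of objects with a datetime 'dt' attribute
    return Counter(e.dt.strftime('%Y-%m') for e in events)

# e.g. by_month(scrobbles()) or by_month(messages())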
Here's the (incomplete) list of the modules:
=my.bluemaestro= | Bluemaestro temperature/humidity/pressure monitor |
=my.body.blood= | Blood tracking (manual org-mode entries) |
=my.body.exercise.all= | Combined exercise data |
=my.body.exercise.cardio= | Cardio data, filtered from various data sources |
=my.body.exercise.cross_trainer= | My cross trainer exercise data, arbitrated from different sources (mainly, Endomondo and manual text notes) |
=my.body.weight= | Weight data (manually logged) |
=my.calendar.holidays= | Holidays and days off work |
=my.coding.commits= | Git commits data for repositories on your filesystem |
=my.demo= | Just a demo module for testing and documentation purposes |
=my.emfit= | Emfit QS sleep tracker |
=my.endomondo= | Endomondo exercise data |
=my.fbmessenger= | Facebook Messenger messages |
=my.foursquare= | Foursquare/Swarm checkins |
=my.github.all= | Unified Github data (merged from GDPR export and periodic API updates) |
=my.github.gdpr= | Github data (uses official GDPR export) |
=my.github.ghexport= | Github data: events, comments, etc. (API data) |
=my.hypothesis= | Hypothes.is highlights and annotations |
=my.instapaper= | Instapaper bookmarks, highlights and annotations |
=my.kobo= | Kobo e-ink reader: annotations and reading stats |
=my.lastfm= | Last.fm scrobbles |
=my.location.google= | Location data from Google Takeout |
=my.location.home= | Simple location provider, serving as a fallback when more detailed data isn't available |
=my.materialistic= | Materialistic app for Hackernews |
=my.orgmode= | Programmatic access and queries to org-mode files on the filesystem |
=my.pdfs= | PDF documents and annotations on your filesystem |
=my.photos.main= | Photos and videos on your filesystem, their GPS and timestamps |
=my.pinboard= | Pinboard bookmarks |
=my.pocket= | Pocket bookmarks and highlights |
=my.polar= | Polar articles and highlights |
=my.reddit= | Reddit data: saved items/comments/upvotes/etc. |
=my.rescuetime= | Rescuetime (phone activity tracking) data. |
=my.roamresearch= | Roam data |
=my.rss.all= | Unified RSS data, merged from different services I used historically |
=my.rss.feedbin= | Feedbin RSS reader |
=my.rss.feedly= | Feedly RSS reader |
=my.rtm= | Remember The Milk tasks and notes |
=my.runnerup= | Runnerup exercise data (TCX format) |
=my.smscalls= | Phone calls and SMS messages |
=my.stackexchange.gdpr= | Stackexchange data (uses official GDPR export) |
=my.stackexchange.stexport= | Stackexchange data (uses API via stexport) |
=my.taplog= | Taplog app data |
=my.time.tz.main= | Timezone data provider, used to localize timezone-unaware timestamps for other modules |
=my.time.tz.via_location= | Timezone data provider, guesses timezone based on location data (e.g. GPS) |
=my.twitter.all= | Unified Twitter data (merged from the archive and periodic updates) |
=my.twitter.archive= | Twitter data (uses official twitter archive export) |
=my.twitter.twint= | Twitter data (tweets and favorites). Uses Twint data export. |
=my.vk.vk_messages_backup= | VK data (exported by Totktonada/vk_messages_backup) |
Some modules are private, and need a bit of cleanup before merging:
my.workouts | Exercise activity, from Endomondo and manual logs |
my.sleep.manual | Subjective sleep data, manually logged |
my.nutrition | Food and drink consumption data, logged manually from different sources |
my.money | Expenses and shopping data |
my.webhistory | Browsing history (part of promnesia) |
Also, check out my infrastructure map. It might be helpful for understanding what my vision for HPI is.
Typical search interfaces make me unhappy: they are siloed, slow, awkward to use and don't work offline. So I built my own ways around it! I write about it in detail here. In essence, I'm mirroring most of my online data, like chat logs, comments, etc., as plaintext. I can overview it in any text editor and incrementally search over all of it in a single keypress.
orger is a tool that helps you generate an org-mode representation of your data. It lets you benefit from the existing tooling and infrastructure around org-mode, the most famous being Emacs.
I'm using it for:
- searching, overviewing and navigating the data
- creating tasks straight from the apps (e.g. Reddit/Telegram)
- spaced repetition via org-drill
Orger comes with some existing modules, but it should be easy to adapt it to your own data source if you need something else.
I write about it in detail here and here.
promnesia is a browser extension I'm working on to escape silos by unifying annotations and browsing history from different data sources. I've been using it for more than a year now and am working on the final touches to properly release it for other people.
As a big fan of #quantified-self, I'm working on a personal health, sleep and exercise dashboard, built from various data sources.
I'm working on making it public; you can see some screenshots here.
Timeline is a #lifelogging project I'm working on.
I want to see all my digital history, search in it, filter it, easily jump to a specific point in time and see the context around it. That way it works as a sort of external memory.
Ideally, it would look similar to Andrew Louis's Memex, or might even reuse his interface if he open sources it. I highly recommend watching his talk for inspiration.
A single import away from getting the tracks you've listened to:
from my.lastfm import scrobbles
list(scrobbles())[200: 205]
[Scrobble(raw={'album': 'Nevermind', 'artist': 'Nirvana', 'date': '1282488504', 'name': 'Drain You'}), Scrobble(raw={'album': 'Dirt', 'artist': 'Alice in Chains', 'date': '1282489764', 'name': 'Would?'}), Scrobble(raw={'album': 'Bob Dylan: The Collection', 'artist': 'Bob Dylan', 'date': '1282493517', 'name': 'Like a Rolling Stone'}), Scrobble(raw={'album': 'Dark Passion Play', 'artist': 'Nightwish', 'date': '1282493819', 'name': 'Amaranth'}), Scrobble(raw={'album': 'Rolled Gold +', 'artist': 'The Rolling Stones', 'date': '1282494161', 'name': "You Can't Always Get What You Want"})]
Or, as a pretty Pandas frame:
import pandas as pd
df = pd.DataFrame([{
'dt': s.dt,
'track': s.track,
} for s in scrobbles()]).set_index('dt')
df[200: 205]
                                                         track
dt
2010-08-22 14:48:24+00:00                  Nirvana — Drain You
2010-08-22 15:09:24+00:00             Alice in Chains — Would?
2010-08-22 16:11:57+00:00     Bob Dylan — Like a Rolling Stone
2010-08-22 16:16:59+00:00                 Nightwish — Amaranth
2010-08-22 16:22:41+00:00  The Rolling Stones — You Can't Always Get What...
We can use the calmap library to plot a github-style music listening activity heatmap:
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 2.3))
import calmap
df = df.set_index(df.index.tz_localize(None)) # calmap expects tz-unaware dates
calmap.yearplot(df['track'], how='count', year=2018)
plt.tight_layout()
plt.title('My music listening activity for 2018')
plot_file = 'hpi_files/lastfm_2018.png'
plt.savefig(plot_file)
plot_file
This isn't necessarily very insightful data, but it's fun to look at now and then!
My friend asked me if I could recommend them posts I found interesting on Slate Star Codex. With a few lines of Python I can quickly recommend the posts I engaged with most, i.e. the ones I annotated the most on Hypothesis.
from my.hypothesis import pages
from collections import Counter
cc = Counter({(p.title + ' ' + p.url): len(p.highlights) for p in pages() if 'slatestarcodex' in p.url})
return cc.most_common(10)
E.g. see the use of =my.workouts= here.
I publish my reading stats on Goodreads so other people can see what I'm reading/have read, but Kobo lacks integration with Goodreads. I'm using kobuddy to access my Kobo data, and I've got a regular task that reminds me to sync my progress once a month.
The task looks like this:
* TODO [#C] sync [[https://goodreads.com][reading progress]] with kobo
DEADLINE: <2019-11-24 Sun .+4w -0d>
[[eshell: python3 -c 'import my.kobo; my.kobo.print_progress()']]
With a single Enter keypress on the inlined =eshell:= command I can print the progress and fill in the completed books on Goodreads, e.g.:
A_Mathematician's_Apology by G. H. Hardy
  Started : 21 Aug 2018 11:44
  Finished: 22 Aug 2018 12:32
Fear and Loathing in Las Vegas: A Savage Journey to the Heart of the American Dream (Vintage) by Thompson, Hunter S.
  Started : 06 Sep 2018 05:54
  Finished: 09 Sep 2018 12:21
Sapiens: A Brief History of Humankind by Yuval Noah Harari
  Started : 09 Sep 2018 12:22
  Finished: 16 Sep 2018 07:25
Inadequate Equilibria: Where and How Civilizations Get Stuck by Eliezer Yudkowsky
  Started : 31 Jul 2018 22:54
  Finished: 16 Sep 2018 07:25
Albion Dreaming by Andy Roberts
  Started : 20 Aug 2018 21:16
  Finished: 16 Sep 2018 07:26

How much do I chat on Facebook Messenger?
from my.fbmessenger import messages
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'dt': m.dt, 'messages': 1} for m in messages())
df.set_index('dt', inplace=True)
df = df.resample('M').sum() # by month
df = df.loc['2016-01-01':'2019-01-01'] # past subset for determinism
fig, ax = plt.subplots(figsize=(15, 5))
df.plot(kind='bar', ax=ax)
# todo wonder if that vvv can be less verbose...
x_labels = df.index.strftime('%Y %b')
ax.set_xticklabels(x_labels)
plot_file = 'hpi_files/messenger_2016_to_2019.png'
plt.tight_layout()
plt.savefig(plot_file)
return plot_file
If you like the shell or just want to quickly convert/grab some information from HPI, it also comes with a JSON query interface, so you can export the data or just pipeline to your heart's content:
$ hpi query my.coding.commits.commits --stream # stream JSON objects as they're read
--order-type datetime # find the 'datetime' attribute and order by that
--after '2020-01-01' --before '2021-01-01' # in 2020
| jq '.committed_dt' -r # extract the datetime
# mangle the output a bit to group by month and graph it
| cut -d'-' -f-2 | sort | uniq -c | awk '{print $2,$1}' | sort -n | termgraph
2020-01: 458.00
2020-02: 440.00
2020-03: 545.00
2020-04: 585.00
2020-05: 518.00
2020-06: 755.00
2020-07: 467.00
2020-08: 449.00
2020-09: 1.03 K
2020-10: 791.00
2020-11: 474.00
2020-12: 383.00
See the query docs for more examples.
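If you'd rather stay in Python, roughly the same question can be answered directly. This sketch relies only on the commits() function and the committed_dt attribute that the query above uses; the exact counts will of course depend on your repositories:

from collections import Counter
from my.coding.commits import commits

per_month = Counter(
    c.committed_dt.strftime('%Y-%m')
    for c in commits()
    if c.committed_dt.year == 2020
)
for month, count in sorted(per_month.items()):
    print(month, count)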
I've got some code examples here. If you're curious about any specific data sources I'm using, I've written them up in detail. Also see the 'Data flow' documentation, with some nice diagrams explaining specific examples.
In short:
- The data is periodically synchronized from the services (cloud or not) locally, on the filesystem
As a result, you get JSONs/sqlite (or other formats, depending on the service) on your disk.
Once you have it, it's trivial to back it up and synchronize it to other computers/phones, if necessary.
To schedule periodic sync, I'm using cron.
- The =my.= package only accesses the data on the filesystem
That makes it extremely fast, reliable, and fully offline capable.
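To give an idea of how modules know where that data lives (the details are covered in SETUP), the configuration is itself just a small Python file. The snippet below follows the typical export_path pattern, but the path is made up for illustration:

class hypothesis:
    # glob pointing at the periodically synced export files (path is illustrative)
    export_path = '/path/to/hypothesis/*.json'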
As you can see, in such a setup the data is lagging behind 'realtime'. I consider it a necessary sacrifice to make everything fast and resilient.
In theory, it's possible to make the system almost realtime by having a service that sucks in data continuously (rather than periodically), but it's harder as well.
I don't consider Python unique as a language suitable for such a project. It just happens to be the one I'm most comfortable with. I do have some reasons that I think make it specifically good, but explaining them is out of this post's scope.
In addition, Python offers a very rich ecosystem for data analysis, which we can use to our benefit.
That said, I've never seen anything similar in other programming languages, and I would be really interested in seeing some, so please send me links if you know of any. I've heard LISPs are great for data? ;)
Overall, I wish FFIs were a bit more mature, so we didn't have to think about specific programming languages at all.
Yes!
- you can plug in your own data
- most modules are isolated, so you can only use the ones that you want to
- everything is easily extensible
Starting from simply adding new modules to any dynamic hackery you can possibly imagine within Python.
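For example (everything here is made up for illustration: the module name, file layout and JSON shape), a self-contained personal module can be as small as:

from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Iterator
import json

@dataclass
class Entry:
    dt: datetime
    text: str

def entries() -> Iterator[Entry]:
    # hypothetical layout: JSON exports synced into ~/data/mynotes/
    for f in sorted(Path('~/data/mynotes').expanduser().glob('*.json')):
        for raw in json.loads(f.read_text()):
            yield Entry(dt=datetime.fromisoformat(raw['dt']), text=raw['text'])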
The whole setup requires some basic programmer literacy:
- installing/running and potentially modifying Python code
- using symlinks
- potentially running Cron jobs
If you have any ideas on making the setup simpler, please let me know!
The modules contain no data, only code to operate on the data. Everything is *local first*: the input data is on your filesystem. If you're truly paranoid, you can even wrap it in a Docker container.
There is still the question of whether you trust yourself to even keep all the data on your disk, but that is out of the scope of this post.
If you'd rather keep some code private too, it's also trivial to achieve with a private subpackage.
Sure, maybe you can achieve a perfect system where you can instantly find and recall anything that you've done. Do you really want it? Wouldn't that, like, make you less human?
I'm not a gatekeeper of what it means to be human, but I don't think that the shortcomings of the human brain are what makes us human.
So I can't answer that for you. I certainly want it though. I'm quite open about my goals: I'd happily get merged/augmented with a computer to enhance my thinking and analytical abilities.
While at the moment we don't even remotely understand what such merging or 'mind uploading' would entail exactly, I can clearly delegate some tasks, like long-term memory, information lookup, and data processing, to a computer. They can already handle these really well.
What about those people who have perfect recall and wish they didn't?
Sure, maybe it sucks. At the moment though, my recall is far from perfect, and this only annoys me. I want to have a choice at least, and digital tools give me this choice.
Probably, at least to some extent.
First, our lives are different, so our APIs might be different too. This is more of a demonstration of what I'm using, although I did put effort into making it as modular and extensible as possible, so other people could use it too. It's easy to modify the code, add extra methods and modules. You can even keep all your modifications private.
But after all, we all share many similar activities and use the same products, so there is a huge overlap. I'm not sure how far we can stretch it and keep the modules generic enough to be used by multiple people. But let's give it a try, perhaps? :)
Second, interacting with your data through code is the central idea of the project. That kind of cuts off people without technical skills, and even many people capable of coding who dislike the idea of writing code outside of work.
It might be possible to expose some no-code interfaces, but I still feel that wouldnβt be enough.
I'm not sure whether it's a solvable problem at this point, but I'm happy to hear any suggestions!
- It's not vaporware
The project is a little crude, but it's real and working. I've been using it for a long time now, and find it fairly sustainable to keep using for the foreseeable future.
- It's not going to be another silo
While I don't have anything against commercial use (and I believe any work in this area will benefit all of us), I'm not planning to build a product out of it.
I really hope it can grow into or inspire some mature open source system.
Please take my ideas and code and build something cool from it!
One of HPI's core goals is to be as extendable as possible. The goal here isn't to become a monorepo and support every possible data source/website to the point where it isn't maintainable anymore, but hopefully you get a few modules 'for free'.
If you want to write modules for personal use but don't want to merge them into here, you're free to maintain them locally in a separate directory to avoid any merge conflicts, and entire HPI repositories can even be published separately and installed into the single =my= python package (for more info on this, see MODULE_DESIGN).
Other HPI Repositories:
If you want to create your own modules or override something here, you can use the template.
Similar projects:
- Memex by Andrew Louis
- Memacs by Karl Voit
- Me API - turn yourself into an open API (HN)
- QS ledger from Mark Koester
- Dogsheep: a collection of tools for personal analytics using SQLite and Datasette
- tehmantra/my: directly inspired by this package
- bcongdon/bolero: exposes your personal data as a REST API
- Solid project: personal data pod, which websites pull data from
- remoteStorage: open protocol for apps to write data to your own storage
- Perkeep (https://perkeep.org): a tool with principles (https://perkeep.org/doc/principles) and especially use cases (https://perkeep.org/doc/uses) for self-sovereign storage of personal data
- Open Humans (https://www.openhumans.org): a community and infrastructure to analyse and share personal data
Other links:
- NetOpWibby: A Personal API (HN)
- The sad state of personal data and infrastructure: here I am going into motivation and difficulties arising in the implementation
- Extending my personal infrastructure: a followup, where Iβm demonstrating how to integrate a new data source (Roam Research)
Open to any feedback and thoughts!
Also, don't hesitate to raise an issue, or reach out to me personally if you want to try using it and find the instructions confusing. Your questions will help me make it simpler!
In some near future I will write more about:
- specific technical decisions and patterns
- challenges I had to solve
- more use-cases and demos (it's impossible to fit everything in one post!)
But I'm happy to answer any questions on these topics now!