Follow the Data

A data driven blog

Health Hack Day ’12: Day 1 impressions

As mentioned in the previous post, Health Hack Day ’12 in Stockholm is underway right now; it kicked off with a number of lectures and a party yesterday, the actual hacking starts today, and the winning apps will be presented tomorrow. You can follow the #hhd12 hashtag on Twitter or go to the link above to see the recorded lectures.

I thought the arrangements and speaker line-up yesterday were surprisingly good, which bodes well for the survival of the Health Hack Day concept; in fact, I’m sure they will be back next year. The lectures (which were recorded and can be viewed online at the link above) were given in a smallish space (part of a fin de siècle apartment complex now used as an office hotel for creative types, located near Stureplan in central Stockholm) decorated with thousands of yellow strips of paper hanging down from the ceiling – a nice-looking installation which also provided some relief from the heat in the room when the wind occasionally blew in through the window and turned the paper strips into a giant ceiling fan. Meanwhile, visitors could sip some excellent free coffee (from Stockholm Roast).

Hoa Ly is a young, enterprising fellow who works for Psykologifabriken (“The Psychology Factory”) and his own Hoa’s Tool Shop (a sister company of the former; both were involved in arranging the event), while also doing clinical psychology research at Linköping University and being a successful DJ. He talked about behavior change through digital tools, using as an example the Viary mobile & web app, which has been used successfully for depression treatment but, as I understand it, is quite general in nature, so you could track any kind of behavior and goals (incidentally, its statistics interface looks a lot like the WordPress interface where I look at access statistics for this blog!). Hoa also talked about correlating data from different sources like Viary, the Zeo sleep tracker and exercise data from heiaheia.com. Integrating data from different sources is of course very interesting, but I didn’t feel we got any really solid, concrete examples here, just a general sense that it should be useful. Anyway. The most intriguing part of Hoa’s talk was when he described the launch of a new project to “disrupt the whole dance music industry” (or words to that effect). The idea is to treat DJ performances as scientific experiments and “gather data from the audience”, for instance by measuring adrenaline levels in response to song selections. Hoa and his partners have created a new country called Yamarill (link in Swedish) to construct a narrative around which this project will be built. The inauguration of the new country will apparently be celebrated on June 1 at the Hoa’s Tool Shop office spaces. The Yamarill “delegation” has already played several DJ gigs “combining electronic dance music, technology and psychology”, as they say in the linked interview (I might also add “quirky clothes”).

Pernilla Rydmark from .SE talked about different forms of crowdfunding and presented five Swedish platforms for it. .SE is also introducing an interesting form of funding called “guaranteed funding”, where they pick projects that are already popular on crowdfunding platforms and promise to fund them up to their stated goal in case they don’t reach it through the crowdfunding platform. Thus, the goal of the funding is, rather paradoxically, that no one should get it (because .SE is hoping that the projects will get fully funded by the crowd).

Bill Day from RunKeeper talked about the need for an open, global health platform and presented HealthGraph, a free platform with tens of millions of users, initiated by the RunKeeper team but expanding far beyond that community.

Mathias Karlsson from Calmark presented his company’s approach to rapid blood biomarker testing: consumable platforms for colorimetric assays (where the measurement of interest is transformed into a color) that can be analyzed on the spot using, for example, a smartphone camera. He brought a developer team that will attempt to build a new test (for bilirubin) into the platform in 24 hours during the hackathon part of the event.

Linus Bengtsson from FlowMinder described intriguing reality mining (or, in less spectacular terms, call log analysis) work where data from mobile phone providers was used to track the movements of people during and after the Haiti earthquake and the subsequent cholera outbreak. Linus and his team tracked 1.9 million SIM cards from Port-au-Prince residents to obtain their estimates of migration patterns. FlowMinder is a non-profit and provides free analysis of the same kind during any kind of global disaster (in collaboration with mobile telephony providers, naturally).

Sara Eriksson and Johan Nilsson from United Minds talked about the “new health”, covering a lot of topics that have been frequently mentioned on this blog, like 23andMe, PatientsLikeMe, and even the MinION sequencer from Oxford Nanopore. I had heard or thought about most of it before, but what I took away from it was the concept of “biosociality” as coined by Paul Rabinow, and also that only 37% of surveyed Stockholm smartphone users did *not* want to collect data on themselves through the phone; a whopping 59% wanted not only to collect the data but to analyze it themselves.

Megan Miller from Bonnier (a Swedish media company with enormous influence in the media here; Megan, however, was working for its US branch) described Teemo, a platform for “digital wellness” with components of collaborative adventuring and social exercise (you try to accomplish “quests” together with your friends by exercising). Teemo looks like it has a pretty nifty design, inspired by paper cuts and Nordic (=Helsinki?) design style. As Megan put it, Teemo wants to “put fun first and track behavior in the background”.

We will see whether Follow the Data has the energy to visit again tomorrow and see what apps have come out of the hackathon, which should be starting a few hours from now!

Food and health data set

I stumbled onto an amazing dataset about food and health, available online here (Google spreadsheet) and described at the Canibais e Reis blog. I found it through the Cluster analysis of what the world eats blog post, which is cool, but which doesn’t go into the health part of the dataset. By the way, the R code used in that blog post is useful for learning how to plot things onto a map of the world in R (and it calculates the most deviant food habits in Mexico and the USA as a bonus). Also note the first line:

diet <- read.csv("http://spreadsheets.google.com/pub?key=tdzqfp-_ypDqUNYnJEq8sgg&single=true&gid=0&output=csv")

which reads the data set directly from a URL into an R data structure, ready to be manipulated. I think it’s pretty neat, but then I am easily impressed.

The Canibais e Reis author was interested in data on the relationship between nutrition, lifestyle and health worldwide, but those data were dispersed over various sources and came in different formats. He therefore (heroically) combined information from sources like the FAO Statistical Yearbook (for world nutrition data), the British Heart Foundation (for worldwide statistics on heart disease, diabetes, obesity, cholesterol and related conditions) and the WHO Global Health Atlas and WHO Statistical Information System (for general world health statistics like mortality, sanitation, drinking water, etc.). After cleaning up the data set and removing incomplete entries, he ended up with a complete matrix of 101 nutrition, health and lifestyle variables for 86 countries. Let the mining begin!
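
Before mining, it can be worth loading the spreadsheet (with the read.csv call shown above) and checking that its dimensions roughly match that description. A minimal sketch – keeping in mind that the sheet also contains country names and similar identifier columns, so the exact column count may differ:

# Load the combined matrix straight from the Google spreadsheet (same URL as above)
diet <- read.csv("http://spreadsheets.google.com/pub?key=tdzqfp-_ypDqUNYnJEq8sgg&single=true&gid=0&output=csv")
dim(diet)          # expect on the order of 86 rows (countries) and ~100 columns (variables)
str(diet[, 1:5])   # peek at the first few columns to see how the variables are named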

As the blog post describing the data points out, there are bound to be a lot of confounding variables and a lot of non-independence in the data set, so it would be a good idea to apply tools like PCA (see e.g. the recent article Principal Components for Modeling), canonical correlation analysis or something similar as a pre-processing step. I haven’t had time to do more than fiddle around a bit – for example, I ran a quick PCA on the food-related part of the matrix to try to find the major direction of variation in world diets. The first principal component (which, at 19.8% of the variance explained, is not very dominant) reflects a division between rice-eating countries and “meat and wheat” countries with high consumption of animal products, wheat, meat and sugar.
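
For anyone who wants to try something similar, here is a minimal R sketch of that kind of quick PCA, assuming the diet data frame loaded above. The column selection is a placeholder – I am only guessing that the food consumption variables sit in one contiguous block – so adjust the indices (or select columns by name) to match the actual spreadsheet layout.

# Quick PCA on the food-related columns (the column range is hypothetical)
food <- diet[, 2:40]                      # placeholder: columns holding per-capita food consumption
food <- food[complete.cases(food), ]      # keep only countries with complete food data
pca  <- prcomp(food, scale. = TRUE)       # scale variables so differing units don't dominate
summary(pca)$importance[2, 1:3]           # proportion of variance explained by PC1-PC3
sort(pca$rotation[, 1])                   # PC1 loadings: which foods drive the split between diets
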
Canibais e Reis also provides a dynamic Excel file where several different types of analysis have been performed. It’s fun to explore the unexpected correlations (or absent correlations) that pop up (see the worksheets BEST and WORST in the Excel file). One surprising finding is that cholesterol is not correlated with cardiovascular disease across this data set (in fact, there is a slight negative correlation).
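
A rough way to reproduce that kind of exploration in R is to rank all variables by their correlation with a chosen outcome column. The outcome name below is hypothetical – check names(diet) for what the cardiovascular mortality variable is actually called:

# Rank every numeric variable by its correlation with one chosen outcome column
outcome <- "cvd_mortality"                # placeholder name; look up the real one with names(diet)
num  <- diet[sapply(diet, is.numeric)]    # keep only the numeric columns
cors <- sapply(num, function(v) cor(v, num[[outcome]], use = "pairwise.complete.obs"))
cors <- cors[names(cors) != outcome]      # drop the outcome's trivial correlation with itself
head(sort(cors), 10)                      # strongest negative correlations (cf. the BEST/WORST worksheets)
tail(sort(cors), 10)                      # strongest positive correlations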

My favourite finding, though, is that cheese consumption is not positively correlated with death from non-communicable diseases or cardiovascular diseases. Those correlations may be massively influenced by confounding variables, but they are negative enough that I choose to continue chomping on those cheeses …

Individualized cancer research

I have been intrigued for some time by Jay Tenenbaum‘s idea to forget about clinical cancer trials and instead focus on deep DNA and RNA (and perhaps protein) profiling of individual patients in order to optimize treatment for each individual patient. (See e.g. this earlier blog post about his company, CollabRx.)

Tenenbaum and Leroy Hood of the Institute for Systems Biology recently wrote about their ideas in an editorial called A Smarter War on Cancer:

One alternative to this conventional approach would be to treat a small number of highly motivated cancer patients as individual experiments, in scientific parlance an “N of 1.” Vast amounts of data could be analyzed from each patient’s tumor to predict which proteins are the most effective targets to destroy the cancer. Each patient would then receive a drug regimen specifically tailored for their tumor. The lack of “control patients” would require that each patient serve as his or her own control, using single subject research designs to track the tumor’s molecular response to treatment through repeated biopsies, a requirement that may eventually be replaced by sampling blood.

This sounds cool, but my gut feeling has been that it’s probably not a realistic concept yet. However, I came across a blogged conference report that suggests there may already be some value in this approach. MassGenomics writes about researchers in Canada who decided to try to help an 80-year-old patient with a rare type of tumor (an adenocarcinoma of the tongue). The tumor was surgically removed but metastasized to the lungs and did not respond to the prescribed drug. The researchers then sequenced the genome (DNA) and transcriptome ([messenger] RNA) of the tumor and a non-tumor control sample. They found four mutations that had occurred in the tumor, and also identified a gene that had been amplified in the tumor and against which there happened to be a drug available in the drug bank. Upon treatment with this drug, all metastases vanished – but unfortunately came back in a resistant form several months later. Still, it is encouraging to see that this type of genome study can be used to delay the spread of tumors, even if just for a couple of months.

A while back, MIT Technology Review wrote about a microfluidic chip which is being used in a clinical trial for prostate cancer. This chip from Fluidigm is meant to analyze gene expression patterns in rare tumor cells captured from blood samples. It is hoped that the expression signatures will be predictive of how different patients respond to different medications. Another microfluidic device from Nanosphere has been approved by the U.S. Food and Drug Administration to be used to “…detect genetic variations in blood that modulate the effectiveness of some drugs.” This would take pharmacogenomics – the use of genome information to predict how individuals will respond to drugs – into the doctor’s office.

“You could have a version of our system in a molecular diagnostics lab running genetic assays, like those for cystic fibrosis and warfarin, or in a microbiology lab running virus assays, or in a stat lab for ER running tests, like the cardiac troponin test, a biomarker to diagnose heart attack, and pharmacogenomic testing for [Plavix metabolism],” says [Nanosphere CEO] Moffitt.

Update 10 Dec:

(a) Rick Anderson commented on this post and pointed to Exicon, a company that offers, among other things, personalized cancer diagnostics based on micro-RNA biomarkers.

(b) Via H+ magazine, I learned about the Pink Army Cooperative, who do “open source personal drug development for breast cancer.” They want to use synthetic biology to make “N=1 medicines”, that is, drugs developed for one person only. They “…design our drugs computationally using public scientific knowledge and diagnostic data collected from the individual to be treated.”
