I was a bit cryptic about my conversation with James yesterday, since I didn't know whether his project was public yet. He has now announced a study that aims to enroll 1 million people through Facebook: https://socialheartstudy.org/
"Phase I of the Social Heart Study will help participants estimate their own risk for future heart attacks, and help us see if online social network use is associated with cardiovascular health. Phases II and III will enable other social network features and start enrolling participants in prevention research studies."
This is a tectonic shift in the way we look at medical information and research. ("23 and We" from http://23andme.com is playing with a similar concept in the area of genomics. I've just signed up to be the first on my block to have my entire exome sequenced: https://www.23andme.com/exome/ so I'll have 20 million base pairs to play with.)
It's curious how similar genomic and social network graphs are (James also studies genomics), and how suitable both of them are to a semantic web/metadata approach. The Linked Data Cloud http://linkeddata.org/ shows some of this activity.
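To make the similarity concrete, here is a minimal sketch of why both kinds of graph fit a triple-based (RDF-style) model. All of the identifiers, predicates, and facts below are hypothetical illustrations, not data from either project:

```python
# Both a social graph and a genomic annotation graph reduce to
# subject-predicate-object triples, the core data model of RDF
# and the Linked Data Cloud. All identifiers are made-up examples.

triples = [
    # social-network facts
    ("person:alice", "foaf:knows", "person:bob"),
    ("person:alice", "study:enrolledIn", "study:SocialHeart"),
    # genomic facts, expressed in exactly the same shape
    ("gene:APOE", "so:hasVariant", "variant:rs429358"),
    ("variant:rs429358", "risk:associatedWith", "condition:HeartDisease"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern (None = wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# One query mechanism serves both domains, with no schema change.
print(query(subject="person:alice"))
print(query(obj="condition:HeartDisease"))
```

The point is that the query code never mentions "friend" or "gene"; the domain lives entirely in the data, which is what makes the metadata approach so adaptable.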
VistA grew up in the era of Moore's law. We replaced mainframes with minicomputers (PDP-11, VAX), then microcomputers. We had years, even decades, to adjust to each shift.
Social networks (the rise of Facebook, for example) and genomic information (plus epigenomic...) make Moore's law look like a slowpoke. By the time the software we are talking about here rolls out, we will have widespread full-genome sequencing, pervasive real-time body sensor feeds, and enormous privacy conundrums (well, more enormous) around social networks, genomic information, face recognition, personalized medicine, and more to contend with.
The moral of the story is that we have to absorb all this change at the semantic web/metadata level, not by hard-coding what the world looked like 15 years ago.
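A small sketch of the contrast, under my own assumed field names (none of this is VistA code): a hard-coded record fixes today's fields in the software, while a metadata-driven record stores the field definitions as data, so tomorrow's genome and sensor feeds can be added without rewriting anything.

```python
# A metadata-driven patient record: field *definitions* are data,
# not code, so new kinds of information (genomic, sensor) can be
# registered at runtime. Field names and units are hypothetical.

field_definitions = {
    "blood_pressure": {"unit": "mmHg", "source": "clinic"},
}

def add_field(name, unit, source):
    """Register a new field type at runtime -- no code change needed."""
    field_definitions[name] = {"unit": unit, "source": source}

def record_value(record, field, value):
    """Store a value, but only for fields the metadata knows about."""
    if field not in field_definitions:
        raise KeyError(f"unknown field: {field}")
    record[field] = value

# Years later: genome and body-sensor data arrive. We extend the
# metadata, and the same record code handles the new fields.
add_field("heart_rate_stream", unit="bpm", source="wearable_sensor")
add_field("apoe_genotype", unit=None, source="exome_sequencing")

patient = {}
record_value(patient, "blood_pressure", "120/80")
record_value(patient, "heart_rate_stream", [72, 74, 71])
record_value(patient, "apoe_genotype", "e3/e4")
print(patient)
```

The design choice is the same one the semantic web makes: keep the schema in the data layer, where it can evolve at the speed of the data, instead of in the code, where it evolves at the speed of software releases.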