Archive for the 'Science/Medicine' category

November 2, 2011 6:07 pm

Since reading Malcolm Gladwell’s “Outliers” several years ago, I’ve been really interested in the question, “What results in success?” Part of the interest is intellectual; the psychology of success is fascinating and surprising. Part of the interest is developmental; like most people, I want to cultivate habits that lead to achievement and impact.

Most of the interest, though, is personal. One of the hats I frequently wear is that of an educator. I’ve mentored medical students, engineering students, and computer science students, and I really enjoy it. More than that, though, I enjoy seeing people succeed. When someone comes up with an improved treatment, product, or idea, it improves the world. I know it’s corny, but it’s still true.

For this reason, I’m fascinated by questions like:

  • What does it mean to be “world-class”?
  • Why is it that so many people never arrive?
  • Are there habits that can be cultivated, traits that can be taught, or ways to share knowledge that can facilitate the journey?

It also shouldn’t come as a surprise that I read widely on the subject. This morning, I came across a marvelous article on the99percent.com, which looks at several topics related to world-class success. As good as the article is, though, I really enjoyed the TEDxBlue video that the article linked to. In it, University of Pennsylvania psychologist Angela Duckworth expounds on her theories about what produces world-class success.

According to Duckworth, it isn’t intelligence and it isn’t talent. She even argues that it isn’t self-discipline, at least as the word is commonly defined. Rather, what matters is “grit.”

The video is about 18 minutes long and well worth the time.


September 25, 2011 4:33 pm

Amongst horse trainers, there is a common piece of advice. It goes like this: “Experiment, observe, remember, and compare.”

This maxim is usually given in answer to questions such as, “How do I learn to understand my horse better?” or “How can I know if a particular technique will work for me?” But as useful as it might be for students of the horse, “Experiment, observe, remember, and compare” is sound counsel for just about anyone. (It summarizes the scientific method, after all.)

For the most part, the work of observation is personal. There are cases, though, where it is possible to see the world through the eyes of another. This might be through artwork or explanation, but when it happens, it can be magical. Such is the work of Ernst Haeckel, a 19th-century biologist, naturalist, philosopher, physician, professor, and artist.

Haeckel spent his career trying to understand the relationships between the different species of the animal kingdom. It was a life of intense observation and comparison, which is readily apparent in his artwork. Below, you can find some of my favorite folios from his books. All of the images are available from Wikimedia Commons, as are many others.


June 29, 2011 5:29 pm

I’ve always had a soft spot for contrarians and polemicists. Yes, they can be obnoxious and reactionary. Yes, they champion laughable ideas in the name of controversy. And yes, they are frequently wrong — common wisdom being common for good reason, in that it is grounded, demonstrated, and robust.

But for all that, contrarians and polemicists also play an important role. They require us to think deeply and broadly about topics that we might otherwise consider simple. This can take us in unexpected directions and result in a deeper understanding of questions we might consider settled. What is string theory, after all, if not the ruminations of dissatisfied physicists who fought their way to respectability?

1.

I bring all of this up because, for the past several days, I’ve been interested in a contrarian response to a polemic statement. (No, I’m not sure that makes any sense.) Here’s what’s going on.

I’m a big fan of the NPR show, To the Best of Our Knowledge. Each week, they do a marvelous job of taking controversial ideas and then exploring them in an evenhanded and profound way.

(Notice I did not say fair or balanced. Trying to be fair and balanced — in a word, objective — in your reporting is one of the fastest pathways to stupidity I’ve ever encountered. It leads to pretending that there are two sides to every debate, and that both sides are worth covering.)

Some weeks ago [1], To the Best of Our Knowledge did a broadcast about “Losing Religion,” in which they interviewed Phil Zuckerman. Zuckerman, a sociologist, recently finished a year-long effort to understand the religious beliefs of Nordic peoples (Swedes, Danes, etc.) and how those beliefs relate to the societies in which they live.

In general, it sounds like a fantastic book and I have little doubt it will end up on my nightstand. Yet, as part of the interview, he said something that really got under my skin: essentially, that “God doesn’t have to be present to have a moral society,” followed by:

It’s a sociological assertion in a way. “We need to have a strong belief in God and this will result in a moral society.” Well, that’s something that we can go look at, we can go check it out. We can quote/unquote “test it.” The fact is, that in the world today, in places like Scandinavia and elsewhere where religion is marginal, these societies are quite moral, quite ethical, especially when compared to very religious societies such as our own (American Society).

Let me be clear here. I’m not disputing his primary claim: non-religious people can be excellent human beings. They can do good (nay, exceptional) things. For example, I greatly respect the work of militant atheists — such as Christopher Hitchens, Ayaan Hirsi Ali, and Philip Pullman (amongst others) — who force us to consider how religion and modern life intersect. Where I draw the line, however, is the claim that we can somehow measure and test the link between a belief in God and the ultimate morality of society.

2.

This question, though fascinating, cannot be addressed by reductionist science. For starters, how do you measure a belief in God? Or, how do you assign a score to “spirituality”? As Philip Pullman has said:

Religion is something that has existed in every human society that we know about and it’s an impulse every human being has in one way or another. I call myself a religious person although I don’t believe in God … Questions of purposes and origins — “Why are we here?”, “What is the purpose of life?”, “Why is good better than evil?” — are religious questions, and I ask them all the time. [2]

Indeed, I would go so far as to say that we all do, whether we are prepared to call our search “religious” or not. (And even if we choke on the “R” word, choosing instead to go with a less historically laden equivalent — spiritual, connection, purpose —  it’s still religion.)

Second, how do you divorce religion from its cultural trappings? While the Nordics may not revel in the “inner” nature of the religious journey the way Americans do (or vocalize their adherence to it in quite the same way), they still follow the outer trappings. In Sweden, for example: 7 of 10 are christened in the Church of Sweden (a statistic that does not include other religions’ baptismal ordinances), 5 of 10 weddings take place in a church, and 9 of 10 receive a Christian burial. This is despite the fact that only 1 in 10 say religion is important to their daily life. These sorts of trappings may not reliably indicate inner belief, but they still influence behavior and conformity.

Third, there is a question of time and trend. Though we don’t often like to admit it, we are the beneficiaries of ten thousand years of human legacy. Our modern, interlinked, technological utopia (and I use that word quite deliberately and without sarcasm) is the result of generations of people trying to improve the lot of their children. The trends toward better health, improved nutrition, greater knowledge, and more thoughtful compassion span millennia. As an example, consider our move toward greater peace, which Steven Pinker discusses in the TED talk below.

In such a complex web, I defy you to control for the positive (or even negative) effects of religion (as compared to, say, politics). Scientists may have excellent tools and statistical tricks that can tease out some relationships, but these only go so far. In some cases, the system is irreducibly complex. You may be tempted to point at elevated rates of murder and spousal abuse amongst the “Bible states” of the American South and scream, “Religious bigotry!” And … you would be wrong. Those increased rates of murder have at least as much to do with primitive farming culture — whether someone’s great-great-great-great-grandfather chose to raise sheep or wheat — as they do with personal answers to the question of “What is the point of all this?” [3].

3.

And so, I come to the point: sometimes we need to be contrarian about our polemical viewpoints. Good polemicists prod us toward improved understanding by forcing us to wrestle with complexity. Bad polemicists are just spoiling for a fight. Unfortunately, we often afford too much attention to the latter.

For the past few years, it’s become fashionable to use science to beat up on belief. I think that’s wrong [4]. Questions of religion, morality, and ethics are very complex, and sometimes they can’t be boiled down to valid scientific assertions. Science, certainly a versatile investigative tool, is ill-equipped to answer questions of motivation, purpose, or origin. It does a superb job of “How?”, but often chokes when confronted with “Why?”

Which is to say, “God (or religion) must (or must not) be present to have a moral society” is not a sociological assertion. It’s an ethical question, and though science may bring some genuinely interesting points to the discussion, it cannot provide a definitive answer. Religion (and, for that matter, God) simply works at much too broad and personal a scale.

__________________________________________

[1] This might actually be months or years ago. I tend to listen to the podcast, and when I hear a program isn’t necessarily related to when it was originally broadcast.
[2] The whole of Pullman’s interview is worth listening to. A recording can be found at the To the Best of Our Knowledge website.
[3] For an introduction, see Chapter 6 of Outliers by Malcolm Gladwell (pages 161 – 168).
[4] I’m speaking to the larger questions of militant atheism, which attempts to directly demonstrate that “Religion Poisons Everything” (to use Christopher Hitchens’s tagline). I’m not sure that Phil Zuckerman goes quite so far, though he does rather strongly imply a similar point in his NPR interview.

March 31, 2011 7:50 pm

In the Open Source Writing book, I’ve got a section on Visual Thinking. While I’m hardly an expert, the way we perceive and understand the world is a serious interest of mine and I read everything I can find on the subject.

This morning, while I was fact-checking a few things for the book, I came across this video by Tom Wujec. In it, he talks about the neuroscience of understanding and a few of the ways that the brain interacts with the world of ideas. Though a bit dated (from the 2009 TED conference), it’s still excellent.

Of related interest, you may want to take a look at the TED project he references, TED Big Viz.

January 28, 2011 4:36 pm

[Image: Challenger explosion, January 28, 1986]

Today marks the 25th anniversary of the Space Shuttle Challenger disaster.

Considering that it is a major anniversary of a catastrophic event, I’ve been somewhat surprised at the response. Or, I guess I should clarify, the lack of one.

A quick search on Google News shows that it is being covered, but the world seems far more focused on the happenings in Egypt and the upcoming launch of Verizon’s iPhone. While the norm for most sites was thundering silence, I found one major exception in my RSS feed this morning: a photo essay posted on The Fox Is Black.

Compiled by Alex Dent, it includes footage of the explosion and is accompanied by a haunting model of the smoke plume. In closing, Dent says, “Today is the 25th anniversary of the Challenger disaster. There are no words.”

When I first saw the images, I wasn’t quite sure what to make of them; I couldn’t even tell what they were. Then, after I reviewed a few more in the series, I recognized what they depicted, which triggered a reaction very similar to the first time I saw footage of the Challenger disaster.


December 9, 2010 1:46 am

I’ve got this thing against mushrooms and fungus.  I don’t like them, they don’t like me, and we don’t get along at all (in the shock-inducing, “I’m going to kill you” sort of way).  Regardless, just because I don’t like them doesn’t mean that I don’t respect them.  This video from BBC Planet Earth shows why you should respect them, too.

I think that the bits about the cordyceps fungus (a parasitic specimen that takes over your brain before killing you) tend to justify my overall feelings toward the little brutes.  Check out what it does to a group of bullet ants about 4 minutes in.


Via Boing Boing.

October 20, 2010 5:32 pm

Though I love it, I sometimes think that Twitter should be banned.  This isn’t because it’s somehow vile or evil.  Quite the opposite, actually.

Twitter should be banned because it provides access to lovely and interesting ways to fragment my attention and waste time.  Given that time is my most limited commodity right now, this is spectacularly bad.  If I had an ounce of self-restraint, control, or prudence it might not be so disastrous.  But I don’t.  I’m worse than a crow chasing a sparkling toy.

For this reason, @BoraZ on Twitter shot my whole morning.  He posted the following questions: “What is a scientist?” and “How do we handle scientific imposters?” along with a link to this article by JL Vernon.  Look, shiny!

While I think you should head over to the post and read Vernon’s comments in full, here’s the take-home message: we should give careful thought to who is allowed to wear the mantle of scientist.  If we don’t, crackpots and crazies will drown out what legitimate scientists have to say.

It turns out that I was thinking about the same question this morning, except from a different angle.  Here is my response.

Interesting article and thanks for sharing. I was thinking about this very question earlier this morning and your article raises an important point. It highlights the dangers of crackpots and whackjobs and points out why we should be wary of them.

I agree with many of your points wholeheartedly.  Controversy is a bad thing that can greatly harm the prestige and public confidence that Science enjoys.

But I am not sure that making science more austere and requiring additional investments of time, energy, training, or a “proper” affiliation is a good idea either.  Already, to earn the label “scientist” requires 7 to 10 years’ worth of study and work. To get into a training program is difficult, and to successfully graduate can be excruciating.

The process selects for a certain type of individual: aggressive, self-motivated, intelligent, (typically) single, and career-oriented. There are probably other characteristics I am overlooking. On the whole, these are all good things. But it also leads to a certain degree of homogeneity amongst professional communities, and that is bad. Whenever I attend a conference, for example, I am usually dismayed at the lack of diversity. Most of the people tend to look like me (white, male, 30s), but worse, they sound like me.

The cocktail banter is boring and we mostly buzz about the same things.  We discuss the same ideas, compete for the same funding, and even use the same justifications for research.  We also revere the same people, most of whom are academics from “top research institutions.”   Non-academics tend to be ostracized and ignored.

It’s my opinion that this is a serious problem, and that the rigors of specialization are making it worse.  Moreover, it blinds the community to the fact that some of the most interesting stuff is happening outside of academia and traditional science.

Consider:

  • Bill Gates is providing a great deal of money to non-profits and even to corporations. Some of that money is even going to people who lack traditional research training or who wouldn’t otherwise be considered credentialed, “proper” scientists. (These impressions are mostly from perusing the foundation website and reading a blog posting about it, so please correct me if I’m wrong.)
  • Microsoft Research employs an enormous number of mathematicians and researchers looking into basic science questions. They even share their results liberally via traditional routes (journals, peer-reviewed conferences, etc.). But because their employer is a for-profit institution, much of the work is ignored.
  • Some of the most interesting work on sustainable and renewable energy is coming from start-ups, not government or university labs.

In many of these cases, it might be a stretch to apply the label of “scientist” to the people doing the work, but I think it fitting. Going even further, I would love to see such individuals invited into the scientific dialogue to discuss their activities and successes formally. There is a huge amount that we could learn, and it might even lead to a new Scientific Renaissance. There is also clearly a demand for such a venue, as seen by the rise of events like TED, which span disciplines and experiences.

To say nothing of the talent that we are neglecting.

Some of the smartest, most capable people I have worked with will never wear the label “scientist” because they have no desire for graduate study.  Yet, they still innovate, discover, and write. Unfortunately, their contributions tend to be buried.  Either they are ignored because they lack the proper credentials, or the real authors are hidden behind others who had little to do with the project or the thinking behind it.

Or, worse, they leave the field entirely because there are no advancement opportunities.  In a world increasingly driven by formal credentials, such people will never be given promotion or independence.  (This is particularly egregious in medical research, where if you aren’t a doctor, you are nothing. Many excellent researchers are nurses or support staff who, because of age, families, and past performance, will never be able to enter medical school and will never be doctors.)

Before I digress too much, here’s the point. Labels and titles matter, but so does inclusion of the non-traditional elements of our communities. When we are too austere and formal, we exclude interesting ideas and people from the discussion. That is ultimately harmful to everyone. (For a really good discussion on this, see the recent book, “Where Good Ideas Come From.”)

I think that excluding potential talent is a far larger problem than navigating the PR effects of crackpots. After all, in the latter case, there are things that can be done. Introductions and explanations can be made, articles can be written, responses posted. That’s how the formal world of scientific dialogue works, after all. If a point is controversial, then other scientists attempt to duplicate the results.  Consensus is not the result of a single article, speech, or debate.  It is the result of many people working on a common challenge, and might involve a few spectacular fights.

The answer to questionable speech is more speech and transparency. It’s how scientific dialogue has worked for centuries, and it is the primary reason for the prestige Science now enjoys.  Trying to censor critics and crackpots doesn’t make them go away, or cause their message to vanish.  It simply sends them elsewhere.  It can, however, limit legitimate dissent and opposing viewpoints, to say nothing of contributions by those who don’t tidily fit the label of “scientist.”

September 24, 2010 6:38 pm

Note: The normal programming of the website has been interrupted by a need to get the Open Source Writing book done.  I fully intend to pick things up very soon, but I first must send a finished draft to my editor/publisher.  They’ve been very patient and I’ve been … somewhat … irresponsible.

With that said, in addition to the topic on the Scribus mailing list, there’s also an interesting discussion happening on the LyX mailing list.  This one is about real-time collaboration and whether it makes sense to include those features in a word processing package.

Though the debate is specific to LyX, it raises larger questions that are also relevant to the Open Source Writing book.  Such as:

  • Is version control software (VCS) a good way to exchange information with co-authors?
  • What is the best way to request feedback and merge changes?
  • How might the equivalent of a “track changes” feature be implemented for users of different platforms and programs? (A rough sketch follows this list.)
  • Should collaboration software be part of the desktop?  If not, how might it be implemented?
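
To make the third question concrete, here is a minimal sketch (my own illustration, not an existing LyX feature) of how a cross-platform “track changes” report might be produced for plain-text drafts, using nothing but the Python standard library. The file names are hypothetical:

    # A rough sketch of a cross-platform "track changes" equivalent:
    # diff two plain-text drafts and emit a reviewable HTML report.
    # Standard library only, so it runs the same on any platform.
    import difflib
    from pathlib import Path

    def track_changes(old_path: str, new_path: str, report_path: str) -> None:
        """Compare two drafts and write a side-by-side HTML change report."""
        old_lines = Path(old_path).read_text(encoding="utf-8").splitlines()
        new_lines = Path(new_path).read_text(encoding="utf-8").splitlines()

        # HtmlDiff highlights insertions, deletions, and edits, much like a
        # word processor's change view, but in HTML any browser can open.
        html = difflib.HtmlDiff(wrapcolumn=80).make_file(
            old_lines, new_lines, fromdesc=old_path, todesc=new_path
        )
        Path(report_path).write_text(html, encoding="utf-8")

    if __name__ == "__main__":
        # Hypothetical file names, purely for illustration.
        track_changes("draft_v1.txt", "draft_v2.txt", "changes.html")

Crude as this is, the underlying idea (diff two drafts, then render the changes in a neutral, portable format) is the piece that would have to survive any cross-program implementation.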

Since this is the topic of another book chapter, again, I would appreciate feedback and thoughts.  As with the other posting, the entire history can be found via the LyX mailing list archives.

______________________________________________________________

Hi Jose and other LyX-Users,

Very interesting articles, thanks for sending them. [In reference to an earlier posting.  The links pointed at collaboration features for AbiWord and Google Docs.]  While trying to digest the ideas, though, I found myself asking two questions and I’d be interested in your feedback.

The first question, of course, is spurred by pure skepticism.

When is real-time collaboration useful?

In what instances do you think this feature would be useful?

For my part, I’m not a collaborative writer.  I don’t think well in the presence of others and I hate writing with an audience.  My one and only experience with Google Wave was a nightmare.  People could see just how much backspacing was involved in my replies!  It was deeply humiliating and I’m quite glad that Wave died.

(Unfortunately, this whole real-time collaboration thing is the next major front in online communication, and I’m sure others will take up the mantle.  Pity.)

But I’m probably not representative of the general population.  Even VCS collaboration often feels too “real time” for me (though I use it and heartily recommend it to others).  I much prefer distinct drafts (PDF) sent via email.  Even better is paper sent via post.   This allows me to organize my commentary and deliver an overall impression along with specific recommendations.

(To be clear, I prefer this arrangement when editing and when receiving feedback.)

The Pain of Expected Features

However, with all that said, real-time collaboration is becoming an expected feature.

AbiWord and Google Docs have it, OpenOffice is talking about it, and MS Word even has a rudimentary option.  I have several colleagues who have moved to Google Docs specifically because of the real-time collaboration options.  (Even though they’ve never actually used them, at least to the best of my knowledge, and the editing experience is hideous in every other respect.)

Such people take solace in knowing that the features are present and “would never move to a platform that didn’t have them.”  I’ve even heard this from the small cadre of users I’ve converted to LyX.  To put it another way, real-time document collaboration is a marketing feature that became an expected option.

Unfortunately, marketing features matter.  They differentiate Program A from Program B and provide a talking point.  Then, because they’re talked about, they become part of the criteria by which a program is judged.

If you need an example, look at what has happened with the benighted real-time collaboration of Google Docs.  The presence or lack of a collaboration feature has become a standard part of any word processor review.  Journalists talk about it as though it were important.  Microsoft Word 2007 was knocked on ZDNet, for example, because the feature wasn’t present.  MS Word 2010 was lauded because it was (even though it sucks).

Yet, I’ve never actually met anyone who writes with others in real time.

(The only counterexample I can think of was an exchange with Michael Foord, who uses it to start program documentation.  But when I pressed him, what Michael described was more of an outlining tool and could easily be replicated with an interactive whiteboard rather than a full-featured real-time editing environment.)

What should collaboration and document exchange look like?

Which leads to my second question.  What should real-time collaboration look like in order to be helpful?  Should it be built into an IM client (a la screen sharing) with voice and video?

Or would it be better as an online service?

Is integration into a desktop writing program necessary? Or would a hybrid approach, similar to the MS Word 2010 implementation, be more appropriate?

You [in reference to the individual who started the thread] have advocated for this strongly and I would love to hear your opinions on the above questions.  What would be most helpful for your work?

Based on other implementations, what doesn’t work quite so well?

As more tools release real-time solutions, I think calls for something similar in LyX will increase.  Not necessarily because it’s useful, but because it’s expected.  And yes, I know that this is a terrible reason to add new features.  Which is actually my general point.

Current implementations of real-time editing are generally awful.  A desktop-level approach would be infinitely superior to what we are seeing now, where each word processor does its own thing.

So, if the feature doesn’t fit within LyX, perhaps we could send the use case scenarios and discussion to another project where it does fit?  The natural fit, at least to my mind, would be one of the IM clients.  Perhaps Empathy?

Anyone else have any thoughts?

Cheers,

Rob Oakes

August 26, 2010 8:13 pm

[Image: de’ Medici family coat of arms]

For the past few weeks, I’ve been working my way through a book called “Cartographies of Time.”  I’m only about a third of the way through it so far, but it is a fabulous book that both deserves (and shall get) its own post and a proper review.  (Maybe even a whole series.)

But while I am not quite ready to dive into that project, there is one aspect of Cartographies of Time that meshes really well with other things I’ve been thinking about.

In particular, I’ve been really interested in the book’s discussion of the methods used for understanding and recording knowledge. Even more interesting are the ways in which these techniques have evolved through time.  (For a book that claims to primarily be a history of the timeline, Cartographies does a magnificent job of covering many tools: lists, maps, charts, trees, and graphs.)

As I’ve read, I’ve found myself enthralled by one particular idea, namely: the records you keep and share seem to be uniquely connected to your mindset (a complex amalgam of education, experience, and circumstance), environment, and culture (particularly important is the effect of language).  As these things evolve, the substance of your thinking (and therefore your records and how you express them) also changes in diverse ways.

Given a rich intellectual and cultural environment, they can flower and spread.  In a barren landscape, the mode and presentation of thought can remain static for centuries.


August 23, 2010 6:01 pm

[Image: The Internet is Forever]

Over on ZDNet, Dana Blankenhorn, who writes the Linux and Open Source Blog, wrote an interesting piece entitled “We are all an open book.”  He was responding to something that Google CEO Eric Schmidt said last week:

Young people may one day have to change their names in order to escape their previous online activity.

To get his post started, Blankenhorn said that Schmidt’s comment may be the “dumbest thing said all year.”  And if you look at it superficially, it is.

There is little chance that all young people of the future will change their names to disown past mayhem.  (Or, for that matter, that doing so would do any good.)  That’s just not how most people think.

If you drill down a level, though, you realize that the comment wasn’t stupid at all.  Or … maybe it’s better to say that the thinking behind the comment wasn’t stupid.  It shows that Schmidt (and thereby Google) is aware of two very powerful, and mutually exclusive, human desires – the hope for fame and the wish to preserve privacy – and that the web is requiring us to rethink our relationship to both.
