Saturday, November 21, 2009
We have two projects now where we want/need to quantify a lot of genes in each sample. For one, we have microarray experiments that need PCR validation; for the other, we are characterizing renal damage in models we haven't used before. In total this is over 200 samples, and we are interested in at least 20 genes.
Of course we could just run our usual real-time RT-PCRs in triplicate using 20 µl reactions, but not only is that an enormous amount of pipetting, it is also horrendously expensive.
We were going to use the SuperArray, which we tried in a previous project, but it only simplifies the plating; you still have to pipette your reaction mix and samples into each of the wells. Then we found the TaqMan Low Density Array, where you basically pipette each sample just once and it is assayed for (in our case) 48 different genes in a specialized PCR machine. That machine could have been a problem, but luckily there was one available at the central lab for genomics that we could use.
I am looking forward to trying this new thing, using 1/10th the reagents and 1/48th the pipetting for half the price. On the other hand, when it is this easy it really puts PCR into the realm of too simple to be called science. It still has to be done, though.
Now I am just hoping for the same kind of automation in protein quantification. I have some hopes for MRM-MS (multiple reaction monitoring mass spectrometry), which will soon be available at the central lab for proteomics just around the corner from our lab. More on that to come.
Yours,
Michael
Thursday, October 29, 2009
Don't log your data twice
I am sitting here with some RNA expression microarray data from an Illumina BeadArray experiment. In general the data consist of a long list of spot intensities (~23k genes) for each array in the experiment. If you want to compare any two of these lists in R using Bioconductor and the beadarray package, you can run a small piece of code like this:
plotMAXY(exprs(BSData),
         arrays = 1:2,
         labels = targets$sampleID[1:2])
[plotMAXY is a plotting function; exprs extracts the expression values from the BeadSummaryData object (BSData); arrays tells the function which arrays to compare, and labels tells it what to call them]
Then you get a graph like this:
Figure 1: On the lower left you see an XY-plot, where the intensities of all genes on array C1 are plotted against the intensities on array C2. The dashed lines show a two-fold change in expression between arrays. To the upper right is the MA-plot (M, the ratio of intensities on the two arrays, is plotted on the y-axis, and A, the average intensity of a gene, on the x-axis).
What should strike you is that one of the arrays appears to have a generally higher intensity than the other, and that the difference changes with intensity, giving a banana-shaped curve. Some very smart people have shown that most of this is just an artifact of the method, and that the distribution of intensities in comparable samples should be the same and not vary with expression level. So what you do is normalise the data of all arrays to the same quantile distribution.
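To make the idea concrete, here is a toy sketch of the principle behind quantile normalisation, run on a made-up matrix (just an illustration of the idea, not what beadarray does internally):
# Toy illustration of quantile normalisation on made-up data:
# force two "arrays" (columns) onto the same intensity distribution.
m <- cbind(a1 = c(2, 5, 3, 9), a2 = c(4, 10, 6, 18))  # one column per array
ranks <- apply(m, 2, rank, ties.method = "first")     # rank each gene within its array
ref <- rowMeans(apply(m, 2, sort))                    # reference distribution: mean of each quantile
normalised <- apply(ranks, 2, function(r) ref[r])     # replace each value by its quantile's reference
normalised                                            # both columns now share the same distribution
With beadarray itself, the whole thing is a single call: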
BSData.log2.quantile <- normaliseIllumina(BSData,
                                          method = "quantile",
                                          transform = "log2")
[BSData.log2.quantile is the output variable; normaliseIllumina normalises data from Illumina arrays using the method given by method and performs the transformation given by transform; BSData is the BeadSummaryData]
The result should be a graph such as:
Figure 2: plotMAXY result after normalisation, when all is well.
You can see that the average intensities of the two arrays are now equal, so that the cloud lies around the unity line in the XY-plot and the 0-line in the MA-plot. There is no longer any banana shape to the curve. But there are a number of spots outside the two-fold lines, showing differentially expressed genes.
What I got when I plotted the normalised data this time was this plot:
Figure 3: Example of a plotMAXY result that shouldn't exist.
It's like, D..N. It shouldn't look like that. S..T! It's impossible that there should be such a curve. The XY-plot shows one sample to be impossibly much stronger than the other, but only at high intensities. What the F..K are the fold-change lines doing at that kind of angle anyway? The MA curve at least looks like it should, but shows no differentially expressed genes at all. B....Y V......E! Even technical replicates aren't that similar.
When I can hear myself above the beating of my heart and can see the screen again through the torrent of cold sweat pouring into my eyes, I look more closely at the XY-plot and see that the scales on the x- and y-axes are different. In fact, no intensities are higher than 4.
What has happened is that the plotMAXY function log2-transforms the data by default before plotting (that's why the two-fold line is labelled 1), and I had already done the log2 transformation when I quantile-normalised the data. What I should have done is add "log = FALSE" to the plotMAXY call, like this:
plotMAXY(exprs(BSData.log2.quantile),
         arrays = 1:2,
         labels = targets$sampleID[1:2],
         log = FALSE)
[log = FALSE tells plotMAXY not to log2-transform the expression data from BSData.log2.quantile]
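In hindsight, a one-line sanity check would have spared me the palpitations. A minimal sketch, assuming the objects above (log2 Illumina summary intensities typically land somewhere below 16, while raw intensities run into the tens of thousands):
# Check what scale the data are on before plotting:
range(exprs(BSData.log2.quantile))  # small numbers: already log2-transformed
range(exprs(BSData))                # large numbers: still on the raw scale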
Now I get a graph more like figure 2 and can continue with my analysis without further risk of apoplexy. Or maybe not; data analysis is a forest fraught with formidable foes and fearsome formulas.
I leave you now to venture again into the jungle of statistical tricks.
Wish me luck,
Michael
Thursday, October 15, 2009
12th Symposium on Vascular Neuroeffector Mechanisms
I just got a heads-up on an interesting symposium in connection with WorldPharma 2010: the 12th Symposium on Vascular Neuroeffector Mechanisms, put together by Pernille Hansen in Odense. You can find the website at www.vnm2010.org; it will give you a perfect reason for staying another day in Denmark at the very best time of year.
Now I just have to scrape together an abstract. I'm sure I saw some data here somewhere.
Yours,
Michael
Monday, October 05, 2009
Telomeres and Kidneys
Elizabeth H. Blackburn, Carol W. Greider & Jack W. Szostak were announced as the 2009 Nobel laureates in physiology or medicine: "for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase" (nobelprize.org).
There are 169 articles matching "kidney" AND "telomere" on Medline. So what is this Nobel Prize thing about for the nephrocentric community (except for a wide open field)?
Well, experimentally it is important to realize that rodents don't show telomere attrition. Their cells still stop dividing and go into senescence, actually using some of the same proteins, but their telomeres do not get shorter (Famulski & Halloran 2005).
In humans, on the other hand, there are some interesting findings. Telomere length in circulating leucocytes is correlated with kidney function in patients with chronic heart failure (CHF) (Wong et al 2009). That just says that greater biological age makes the kidneys more vulnerable in CHF, or perhaps, more narrowly, that greater biological age of the immune system puts the kidneys at risk. So, what about the telomeres in the kidneys?
They do shorten with age, and interestingly shorten more in the cortex than in the medulla (Melk et al 2000). I guess this can be tied to the higher metabolic activity in the cortex, and to the lower oxygen tension in the medulla. Decreased oxygen tension has, after all, been shown to protect against cellular senescence, although in fibroblasts, not in the kidney (Betts et al 2008). The mechanism most often bandied about is the role of reactive oxygen species, which have been shown to play a part in Cyclosporin A-induced senescence in renal tubular cells (Jennings et al 2007), as well as in angiotensin II-induced senescence in vascular smooth muscle (Herbert et al 2007). Finally, telomere length has been shown to predict graft survival in transplant patients (Koppelstaetter et al 2008).
Well, that's pretty much it for this post. In short, rodents have telomeres that don't get shorter with ageing. In humans they do, and they can be used to predict outcomes. Reactive oxygen species are the most important pathway for senescence and can probably be used to predict telomere outcomes in humans from experimental models.
Yours,
Michael
Wednesday, September 30, 2009
Combining clinical and experimental careers
Once upon a time in the last century I was admitted to the medical school at Uppsala University; to a special branch program for just twelve students who thought they wanted to do research as well as become Real Doctors(TM). That program was canceled just a few years later because not enough people went into basic research.
I did, but, like the cellophane man, I am all but invisible. After having done research exclusively for a couple of years I am now trying to get back to clinical work and hopefully work out a combination of sorts. This may sound stupid to you (as it should); it even sounds stupid to me. As of today I know no one in my generation who has done it successfully, and I know of very few in the earlier generation, most of whom were already consultants when they started doing research and manage somewhere upwards of half a day per week of clinical work.
That may be enough for a consultant, but as a junior doctor, working half a day a week will get you nowhere. Possibly make you dangerous. I have been considering a fifty/fifty arrangement, but how do you compete for grants while putting in only half the work of the competition? The result will inevitably be that I become a cellophane man in research as well.
Ignoring my all-knowing, prescient and cynical self, I set out to get such a fifty/fifty position.
Damn it's hard. I didn't really mean all that about being a cellophane man, but damn. I thought I had been working pretty hard, writing papers and doing generally important stuff, but what does the hospital think about that?
Me: - I have done some really interesting research.
Hosp. - So, you haven't even worked as a temp in the ER?
Me: - But I have external funding.
Hosp. - So, you haven't even worked as a temp in the ER?
Me: -mmmf...
Then my limbs start sticking together like cellophane does and I crumple into a small ball of see-through plastic.
So now I am trying to juggle research and grant writing with getting a temporary position as a clinician so that I can apply for a proper job at some future time.
Wish me luck,
Michael
Saturday, September 26, 2009
ESH Summer School
I am just back from the Summer School of the European Society of Hypertension, and this time it will be a timely post. The summer school was held in Smolenice in Slovakia quite close to the Austrian border.
It was graced by some 60 school children (mostly cardiologists around 30 years of age) and some very well prepared speakers (there were some of the other kind of speakers as well, but that is a secret). The main clinical questions in hypertension were very well handled, with presentations of overwhelming evidence. As is usual at any scientific meeting, very few of the presenters showed any remorse about running over, and the coffee breaks were often quite a bit shorter than planned.
The physiology of hypertension was not as thoroughly covered, but the most important bases were at least touched upon. Among those, the crucial role of the kidney in hypertension (it being the most important organ and all that) was quite adequately presented. Microvascular disease was one of the few areas that was well covered from both a clinical and an experimental perspective.
As with all schools there was an ample supply of suggested reading. Some 60 articles, and some books. Having learned nothing from my previous 25 years of schooling, I obviously thought that everyone would have printed and read all these beforehand. Consequently I felt a little bad that I had not. Need I say that there was no need for worry.
In the end we were 60 doctors and scientists locked away in a castle that would have made Disney (or Dracula) proud. The social program was quite intensive. Luckily the beer at the local pizzeria was maybe the cheapest in all of Europe, and it was quite strong as well. In combination with a too-long trip home, I was hung over and jet-lagged for three days.
I can only recommend going to the ESH summer school if you are in any way interested in hypertension and have the opportunity (your national hypertension society is the way in). It will be in Rovinj, Croatia in 2010 (the beer should be cheap), and in Barcelona in 2011.
Yours,
Michael Hultström
Wednesday, September 09, 2009
Experiment planning - logistics, or how to keep track of all the stuff
After I have had a good run in one experiment, I often plan for too much work to be done in too little time in the next one. Conversely, after running an experiment where everything goes wrong and takes double the time it should, I tend to plan rather less work per day, often leading to dead air and general inefficiency.
I have a feeling that this will repeat itself ad absurdum.
This got me thinking about one of the hardest, least often discussed and maybe most important parts of the scientific endeavour: logistics. That is, who does what, when, and what he (or she) needs to do it.
"Logistics, I don't know nothing about these logistics, are but I want some." General Patton (from http://www.chuckhawks.com/logistics.htm). Actually I was looking for a source for the more well known quote: "Amateurs talk tactics, professionals talk logistics," but it seems to be too ubiquitous to have a known origin.
At the moment I am working with large hand-drawn sheets with one column for each experiment/project and one row per week. This works because you can see what is to be done every week and you can see all the projects at once. On the other hand, it sucks because it can't be emailed or shared over the internet and is difficult to change or update. What I would like is a scheduling and to-do system that is computer based and has very good visualisation possibilities.
I have tried to work with Google Calendar, which is very easy to share, but it is not so good for keeping track of multiple projects. Keeping each in a separate calendar is a horrible solution.
I guess it is possible in org-mode, but I haven't really gotten that far with it and visualisation probably has to be implemented separately (which could be fun, but which I don't have the time for).
So, does anyone have a good system for project tracking that would be appropriate for experimental science?
Yours,
Michael
Friday, September 04, 2009
Report from SPS annual meeting in Uppsala
This was meant to be a timely post, but that is apparently not my forte. It was also meant to include some interesting science, but I thought it was time to get this out, seeing that the next meeting is just around the corner.
Now it is a week, er, a couple of weeks, well, a month since the annual meeting of the Scandinavian Physiological Society, which was held in Uppsala, Sweden this year. The meeting provided the best renal sessions I have visited so far at an SPS meeting. This obviously does honour to the Kidney Resösh Group [sic] in Uppsala.
By far the best part of the meeting was the extracurricular activities. I got to meet up with many of my old friends and colleagues in Uppsala, and then we got drunk together.
So thanks to everyone. I hope to see you again next year, maybe already at Experimental Biology.
/Michael
Thursday, September 03, 2009
Pilot experiments: Wasting your time
What is a pilot experiment and when should you do it? More importantly, when should you not do it?
In my mind there are two kinds of pilot experiments. Either it's something you do in passing while investigating something else, like testing another agonist in a model you're already working with. In that case, yes, you should do it. That's how I have found lots of the interesting things that I eventually got published.
The other kind is where you have something you are really interested in but decide to save time or money by doing a small first series. This is the kind of pilot experiment that you should never do. What you will end up with is either a positive result that will need additional experiments to be convincing, or a negative result that is not quite certain.
There is actually a third kind: the kind where you do some experiments and have to add new groups, controls or whatever after the rest of the experiment is done. This is in many ways like the second kind, but can be harder to avoid. To some degree you can avoid it by always insisting on running a full experiment, with all reasonable controls. The problem is that this is very work intensive, and may often not be necessary. Or your reviewers may favour other control groups than you do. In any case, once you find yourself there, your only recourse is to suck it up and hope that the baseline values don't end up being too different.
Did I just do it?
You bet.
Sucks to be me (as a friend of mine would say).
Michael
Monday, August 31, 2009
Monday meetings, more than oversight.
The Monday Meeting(tm), or weekly meeting on another day, is one of the foundations of labs and research groups everywhere. It can take on any number of guises. Here I will outline how we organize our Monday meetings now and how we arrived there.
When Our Enlightened Despot spent his first sabbatical in the US he came home with the idea of the Monday meeting.
It basically consisted of the Peons reporting what they did last week and what they planned to do the coming week. People showed up when they had time, and our Illustrious Clinicians never had that time, mostly because 9.00am Monday morning is a busy time in the clinic. Every now and then someone was assigned to present some data. This was usually done in the form of graphs on paper, with an intro of: "In the 2004-03 project the expression of pro-col-n-pep was increased... as you see here." Obviously no one understood why this might be important or even interesting, but it was done anyway.
Three (or four) years ago the scientific presentations and discussions were moved to Fridays, and there was a series of very interesting Friday seminars. However, with two meetings per week, attendance was down and there were still no clinicians to be seen.
Two years ago we set a schedule with fifteen-minute scientific presentations every Monday. These were supposed to be either brief reports of recent results or a review presentation of a relevant area. In either case they should be properly prepared and formatted presentations, with a proper introduction, hypothesis, methods and results. With this stricter setup we have managed to keep everyone in the loop; the lab rats can understand the archaic ways of epidemiological research, and the clinicians get the background needed to understand the importance of the genetic expression of the Gene of the Day(tm).
Meeting at nine in the morning still did not work great for our clinicians, so last year we moved the meeting to Mondays at 14.00 (2.00pm). It is a fairly low-intensity time for clinicians, but it does destroy the afternoon's lab work for those so inclined. We have a short progress report where everyone says what they are working on, and we have one fifteen-minute scientific presentation every Monday.
We have managed to keep attendance high and are well on the way to creating a common knowledge base in the whole group, so that the discussion can focus more on the science than on the peculiarities of the different fields.
How do you do it at your lab?
Does it work well or...?
Yours,
Michael
Saturday, August 29, 2009
The lady tasting tea
Statistics is one of the cornerstones of modern science, the other three being Blood, Sweat and Tears.
First, it is important to distinguish between the different things that are called statistics. There is mathematical statistics, or so I have heard, and there is applied statistics, which is what we are generally talking about. In applied statistics there is descriptive statistics, which we call plotting, and finally inferential statistics.
Inferential statistics is the area that most (junior) scientists mean when they say that they hate statistics. It is the area of hypothesis testing or finding of significant differences, and what people hate about it is that it is never clear which test you should use.
You will have learnt that you should predefine the tests you are looking for and design the experiment accordingly. The problem lies in having too theoretical an understanding of statistics and not enough experience with whatever kind of data you possess. This means that you have not been able to predict what kind of analysis gives the most power with the data you end up with.
In the end, the first time you handle any type of dataset has to be considered an exploratory investigation. And, it is important to accept that and really get to know how the data responds to analysis and what kind of results you can get out of it. Then, next time, you can predetermine the correct type of analysis, and feel like you are on higher ground, theoretically.
So, how do you get to understand statistics, or at least start to understand it?
I have found no statistics textbooks that are worthwhile in the beginning. None. Many are useful once you have understood something.
The same goes for statistics courses. They are just not useful, especially not for understanding statistics. The most useful courses I have had in statistics were those that focused on a specific problem and a specific approach, e.g. "Using R and Bioconductor for the analysis of mRNA expression microarrays". Even with those, the real help with understanding comes from working with your own data.
I suggest two things:
- Read the wonderful book "The Lady Tasting Tea" by David Salsburg, which describes the development of statistics as a science in a most delightful way. He manages to present many of the basic ideas of statistics without being technical. I can only recommend it, especially if you like the history of science.
- Start using a statistics program that can help you understand what you do. I suggest R. It's free, it's what all the statisticians use (so you can ask them how to do something), it includes a manual, and it includes original references to many of the methods.
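To give a flavour of the plot-first, test-second workflow, here is a minimal sketch in R with entirely made-up numbers (the group names and values are hypothetical):
# Made-up measurements for two hypothetical groups
control <- c(5.1, 4.8, 5.3, 5.0, 4.9, 5.2)
treated <- c(5.9, 6.2, 5.7, 6.0, 6.3, 5.8)
# Descriptive statistics first: plot and look at the data
boxplot(list(control = control, treated = treated), ylab = "measurement")
# Inferential statistics second: test for a difference
t.test(control, treated)
Half an hour of poking at your own data like this teaches more than a semester of theory.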
Michael Hultström
Failed experiment depression
How depressed should you get when an experiment fails?
I have a tendency to get a bit blue whenever an experiment goes south. This generally takes the form of an immediate re-run and a panic-like reading of lab journals to find any possible error. If it doesn't resolve directly, I usually come up with a plausible explanation and try to confirm it with my colleagues. When I have a decent plan for how to continue I generally feel better, but not happy by any measure. I will often continue my ruminations for days, often until the experiment has finally succeeded, and sometimes even after it is done, finished and published.
Maybe this is a Good Thing(TM). As recently reported on Slashdot, from an article in Scientific American and an original article in Evolutionary Psychology in 2007, depression may serve to focus attention on the causative incident. From an evolutionary perspective this would potentially allow for the quick resolution of otherwise intractable problems.
So, when my western blot failed (no discernible bands at all) on Friday, I immediately went into a panic. Tried another ECL, tried an extremely sensitive ECL, re-washed and re-applied the secondary antibody, etc. It still didn't work. The problem is probably that the secondary antibody is dead. It was a brand new batch, and it was stuck in transit, at room temperature, for almost a week.
Anyway, the failed western blot soundly destroyed my Friday and my weekend. Not to mention that I can't think about anything else, and that I therefore have to write this blog post instead of working on my thesis introduction or the other parts of the paper where the western blot will be included once it works.
If Andrews et al are correct, this may be a well-adapted response. However, I think it demands that one can continue working on the problem until it is solved (damn these weekends). When you have lots of other things to do, even the small depression caused by a failed western blot can severely sabotage your schedule.
My vain hope is that now that I know about it I may be able to compensate. At least a little.
Michael
Thursday, August 27, 2009
First recorded nephrology experiment?
I like finding original documentation for the facts in science that are generally accepted, and for which experimental evidence is seldom or never given. I hope to write a series of these small notes concerning the history of experimental renal physiology.
One can assume that the association between the kidneys and urine production has been known since the first time pre-historic man, or woman, killed an animal and ate the kidneys. There has, however, been considerable discussion through the ages about the relative roles of the kidneys, the ureters and the bladder.
Around 360 B.C. Plato wrote in "Timaeus" that:
“The outlet for drink by which liquids pass through the lung under the kidneys and into the bladder, which receives and then by the pressure of the air emits them...” (from the Internet Classics Archive at MIT, translated by Benjamin Jowett.)
He then goes on to describe the use of the actual outlet in sexual intercourse.
Somewhere around the same time, Aristotle gives a surprisingly correct description in "On the Parts of Animals":
"A pair of stout ducts, void of blood, run, one from the cavity of each kidney, to the bladder; and other ducts, strong and continuous, lead into the kidneys from the aorta.However, even though some dissection must have taken place, neither Plato nor Aristotle describe the methods behind their conclusions. I have read somewhere that it was not fashionable to refer too closely to the real world at the time. The world of ideas (platonic) was thought to exist independently of the real world and all important conclusions could be arrived at through contemplation and deduction.
The purpose of this arrangement is to allow the superfluous fluid to pass from the blood-vessel into the kidney, and the resulting renal excretion to collect by the percolation of the fluid through the solid substance of the organ, in its center, where as a general rule there is a cavity.
From the central cavity the fluid is discharged into the bladder by the ducts that have been mentioned, having already assumed in great degree the character of excremental residue." (Part 9. Also from the Internet Classics Archive, translated by William Ogle.)
Quite a bit later, Galen, or Claudius Galenus, gives what I believe is the oldest surviving description of a scientific experiment in renal physiology. Seeking to show that the urine is produced in the kidneys and passed through the ureters to the bladder, Galen writes:
"Now the method of demonstration is as follows. One has to divide the peritoneum in front of the ureters, then secure these with ligatures, and next, having bandaged up the animal, let him go (for he will not continue to urinate). After this one loosens the external bandages and shows the bladder empty and the ureters quite full and distended- in fact almost on the point of rupturing; on removing the ligature from them, one then plainly sees the bladder becoming filled with urine." ("On the Natural Faculties", book 1, chapter 13, from the Internet Classics Archive, translated by Arthur John Brock).It is ofcourse not a very humane experiment given that anaesthesia was not developed for another 1600 years or so. I have not found any date of publication, but given that he lived from year 129 to year 200 it is the oldest fairly solid experimental description in nephrology I have been able to find.
Do you know of any older records?
Please chime in,
Michael
Tuesday, August 25, 2009
Nephrophysiologist, but why?
This was meant to be the first post, but it was much harder to come up with an honest "artist's statement", as Chemiotics calls it, than I thought.
Can there really be any interest in a blog about renal physiology? Any really interesting stuff will be published anyway, and if it is blogged first, that may count as prior publication. What is left is to discuss ideas in general terms, published work, methods, and the practice of physiology. However, I believe some interesting discussions may be had in these areas. There are many insights that aren't appropriate for full publication but may be aired in a blog, much in the way these kinds of things are aired and discussed at conferences and meetings.
My hope is to keep the discussion going between the all too far apart meetings in Real Life(tm). There are some science bloggers out there in not-too-distant fields. I hope to tie up with these early adopters of this brave new world 2.0; there are many things that different fields of science have in common. Maybe I can even induce a couple of new ones to join in.
So, please, write a comment, start a blog and let's have a discussion.
Yours,
Michael Hultström
Prevalence of hypertension in Norway
The Norwegian Society for Hypertension got a surprisingly difficult question: What is the prevalence of hypertension per age group in Norway?
You would certainly think there was a straightforward answer to that question, and that we should know it. You wouldn't even anticipate the question, because you would think the answer had to be published in an easily accessible form. Anyway, it is not. The best current evidence is only published in Norwegian, so in the interest of the hypertension community, here is the short story.
In 2000-2003 the Norwegian Institute of Public Health performed a health survey in selected counties, which was published in the Journal of the Norwegian Medical Association (Pubmed: Tidsskr Nor Laegeforen. 2007 Oct 4;127(19):2537-41.). The data are in table 4. What it says is that 17.4 % of men and 3.2 % of women aged 30 had a systolic blood pressure higher than 140 mmHg and/or a diastolic blood pressure higher than 90 mmHg. The proportion of hypertensives gradually increased to 65.4 % of men and 69 % of women at the age of 75 years.
Thanks to my colleagues in the Hypertension Society for helping me find the data, especially Randi Selmer at the Norwegian Institute of Public Health, who provided (and co-authored) the paper.
In the coming year we can look forward to the publication of the whole-population data from the third Nord-Trøndelag health survey (HUNT-3). There will still not be a definitive answer to the question of the prevalence of hypertension in Norway, but age group coverage will be down to 13 years of age.
So, as far as I can see, I have an 82.6 % chance of not having hypertension (not counting that I am rather stressed at the moment, which would bring it down to -10 % somewhere).
/Michael