Anecdotal evidence is popular in medicine. Obviously it's not called that. It's called a "case report" or an "observational study". Basically you see a number of patients, pick one that supports some notion you have, and put it on a pedestal. Another way of doing it is to collect a number of these cases and call them a case series. The next level is to collect all your cases and dig around until you find some common feature in all of them or in a sub-population. Finally you get to the level of looking at all people with a certain condition, usually everyone of a certain nationality with a certain disease, and then you can call it "epidemiology". The problem with all of these designs is that if you look hard enough you can find something interesting-looking in any collection of data. This is especially true if you don't care what you are looking for.
The next level of scientific quality is to care what you are looking for from the beginning. This means you pick a group of people and say, for example: "I believe those with higher blood pressure now will die sooner than those with lower", and then you wait. After some time, in the best case a predetermined time, you check how many have died and draw your conclusions. Such a conclusion might be that high blood pressure is a risk factor for death. This is not necessarily such a bad way of doing science. The problem is that you then start muddling through your data looking for a reason for the difference in death rates, and suddenly you are producing anecdotal evidence again.
The highest level of evidence is not to use a found population with some difference, but to use a homogeneous population and induce a change in some of them while comparing them to the rest. This is experimental science and it is the only way to show causal relationships. You still have to preconceive what you are looking for, otherwise you are still just muddling through the data looking for differences. This means that an experimental study is only valid as such for the question it was designed to answer. If you take the population in an experimental study and look at something else, then you are back to doing an observational study, producing anecdotal evidence.
So, what is the problem? Anecdotal evidence often turns out to be correct. This is how much science comes about; you have an anecdote (say a case study, or some epidemiological finding, or a post-hoc analysis of an experimental study) that leads you to a hypothesis. Then you test the hypothesis with an experimental study to see if it holds up. Easy. This is how it is supposed to be. The problem is that anecdotal evidence is often taken as true directly (by the media, policy makers and the public at large), without further testing, and the later experimental evidence does not get the same attention even though the science is better.
For scientists this is not as big a problem as it is for the public. By following the field you learn what questions are being asked and which study was designed to answer which question. With clinical trials this is solved today by publication of the protocol and hypothesis before the study starts. Pre-clinical papers, on the other hand, are mostly experimental in nature, but they are often written so that it is not clear whether the conclusion is based on the original question or on something the authors picked up along the way. This is because there is not enough space to tell the story as it happened, and because frankly it wouldn't be that good a read. The principle is that you take what you have and write the best story possible. That's how to get published. What it also means is that, unless you know the investigators and what their main focus is, you don't know if the study was a preconceived experimental study or a post-hoc study.
Getting to know the investigators in your field means going to a lot of conferences and listening to a lot of talks. That's when it is a good thing to have some online comics to fall back on. Because falling asleep is embarrassing.