My interest in this material isn't in checking whether Hansen and the other stick players are doing things right. I'm more interested in whether processing the data in a simple, straightforward manner that makes sense to me gets results similar to theirs.
I've always had a great deal of trouble with the idea that a basic measurement like temperature is in need of massive computerized analysis. The temperature is the temperature, and it ought to be clear what's going on just by looking at the raw data, or something close to it.
So far we've seen that the differences between Hansen's dset0 and dset1 (raw and locally adjusted data respectively) are minimal when anomalies are calculated for each station or location. How about if the absolute temperatures are calculated?
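For reference, a per-station anomaly is just the station's readings minus its own baseline mean. This is a minimal sketch of that idea, not Hansen's actual code; the data layout and baseline period are assumptions for illustration:

```python
import numpy as np

def station_anomalies(temps, years, base_start, base_end):
    """Convert one station's absolute annual means to anomalies.

    temps: annual mean temperatures (NaN where missing).
    years: matching array of years.
    The baseline is the station's own mean over [base_start, base_end],
    so differences in absolute level between stations drop out.
    """
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base_start) & (years <= base_end)
    baseline = np.nanmean(temps[in_base])
    return temps - baseline

# Toy example: a slight warming trend around a three-year baseline.
anoms = station_anomalies([10.0, 10.2, 10.4, 10.6],
                          [2000, 2001, 2002, 2003],
                          base_start=2000, base_end=2002)
```

Because each station is referenced to its own baseline, stations at very different absolute temperatures can be averaged together without the warm ones swamping the cold ones.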
So I did this by first averaging together the readings from the dset0 stations at a given location, then averaging all the locations across years, doing the same for dset1, and finally plotting the means to compare the two sets.
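The averaging described above can be sketched in a few lines. The nested-list layout here (locations containing stations containing aligned annual readings) is a hypothetical structure for illustration, not the actual GISS file format:

```python
import numpy as np

def yearly_means(locations):
    """Average stations within each location, then average the
    location series together, giving one absolute mean per year.

    locations: list of locations; each location is a list of
    station series, all aligned to the same years (NaN = missing).
    """
    per_location = [np.nanmean(np.array(stations, dtype=float), axis=0)
                    for stations in locations]
    return np.nanmean(np.vstack(per_location), axis=0)

# Two locations over two years; the first location has two stations.
dset0_toy = [
    [[10.0, 10.5], [11.0, 11.5]],   # location A: two stations
    [[20.0, 20.2]],                 # location B: one station
]
means = yearly_means(dset0_toy)
```

Averaging stations into their location first keeps a location with many stations from being over-weighted relative to a location with one.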
How much of the adjustment of dset0 is necessary if the two sets are so similar at the end of the adjustment process? Why build a gnarly black box of an analysis method if a simple and straightforward approach will do as well? When comparing dset0 and dset1 on a location-by-location basis the differences look pretty big in some cases, but apparently they don't make much difference overall. Perhaps the adjustments are cancelling each other out.
Note that the plots look quite different from the plots of anomalies. The calculation of anomalies smooths the data considerably, apparently because differences in absolute level between locations are removed and many of the locations are not represented through the whole time period examined. The process of dealing with these difficulties is apparently handled by gridding the data, and for that I'll again be looking into what McIntyre has found.
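The basic idea behind gridding, as I understand it, is to average stations into latitude/longitude cells and then average the cells, so a dense cluster of stations counts no more than a lone station elsewhere. This is my own simplified sketch of that idea, with a made-up cell size, not the GISS gridding scheme:

```python
import numpy as np

def grid_average(lats, lons, values, cell_deg=5.0):
    """Average station values within lat/lon grid cells, then average
    the cell means, so station clusters don't dominate the result."""
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells.setdefault(key, []).append(v)
    return np.mean([np.mean(vs) for vs in cells.values()])

# Three stations crowded into one cell, one station off by itself:
# the lone station carries the same weight as the whole cluster.
avg = grid_average([40.1, 40.2, 40.3, 60.0],
                   [-100.1, -100.2, -100.3, 30.0],
                   [10.0, 10.2, 10.4, 0.0])
```

A real scheme would also weight cells by area (cells shrink toward the poles) and interpolate into empty cells, which is where the complexity starts to creep back in.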