Can you explain how this method works in general? Won't it depend on the scale of the histogram? For instance, if in this case the histogram was divided up into larger intervals (say, of width 4), then wouldn't your method give the result 7/6 ~ 1.17, which doesn't give B as the answer? And surely the standard deviation doesn't change just because you change the scale on the graph... Or am I just completely misunderstanding your approach?
As I said, it's a very rough approach used to eyeball the question, and I wouldn't recommend relying on it often. Basically, the intention is to mentally divide the data up from the centre outwards so that the SDs can be approximately calculated. I'm afraid I'm not exactly sure what you meant.
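(For anyone who wants to compare the eyeballing against the exact figure, here's a minimal Python sketch of the standard grouped-data mean/SD calculation that this method approximates. The bin midpoints and relative frequencies below are invented for illustration; the actual histogram isn't reproduced in this thread.)

```python
# Exact grouped-data mean and SD from a histogram (illustrative sketch).
# NOTE: these midpoints and relative frequencies are made up; they are
# NOT the values from the histogram discussed in this thread.
midpoints = [177.5, 178.5, 179.5, 180.5, 181.5, 182.5]  # hypothetical 1 °C bins
freqs     = [0.05, 0.13, 0.21, 0.25, 0.21, 0.15]        # hypothetical relative frequencies

total = sum(freqs)
mean  = sum(m * f for m, f in zip(midpoints, freqs)) / total
var   = sum(f * (m - mean) ** 2 for m, f in zip(midpoints, freqs)) / total
sd    = var ** 0.5
print(f"mean ~ {mean:.2f} °C, SD ~ {sd:.2f} °C")
```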
If you meant what happens if each bar represented a difference of 2 °C rather than 1 °C, then the data would consist of 7 bars, where each bar would be the average of every two bars that currently exist (e.g. between 178 °C and 180 °C there are currently two bars at ~13% and ~21%, so this would become one bar at ~17%). Using the method I provided, there would still be approximately 6 sections for the SDs, so each SD would be 7/6 of these new 2 °C bars away from the mean (or wherever you'd assume the mean to be according to the skew of the data and the median), which is ~180 °C and wouldn't have changed. Since each new bar represents 2 °C, you'd have to multiply the 7/6 by 2 (a step I omitted in my previous post, since multiplying by 1 changes nothing), which gives ~2.33 again and rounds down to 2 as an integer. Does that make sense?
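(To back up the scale-invariance point with something concrete: here's a small Python sketch, reusing the same invented frequencies as above, that merges the 1 °C bars into 2 °C bars exactly as described and recomputes the SD. The answer in °C comes out roughly the same, which is the multiply-by-the-bar-width step in action.)

```python
# Sanity check with assumed data: merging adjacent 1 °C bars into 2 °C
# bars (height = average of the two, as described above) should leave
# the SD in °C roughly unchanged, because the "7/6 of a bar" figure
# gets multiplied by the new 2 °C bar width.
def grouped_sd(midpoints, freqs):
    total = sum(freqs)
    mean = sum(m * f for m, f in zip(midpoints, freqs)) / total
    var = sum(f * (m - mean) ** 2 for m, f in zip(midpoints, freqs)) / total
    return var ** 0.5

mids1 = [177.5, 178.5, 179.5, 180.5, 181.5, 182.5]  # hypothetical 1 °C bins
freq1 = [0.05, 0.13, 0.21, 0.25, 0.21, 0.15]

# Rebin: each 2 °C bar covers two old bars; its height is their average
# and its midpoint sits halfway between them.
mids2 = [(a + b) / 2 for a, b in zip(mids1[0::2], mids1[1::2])]
freq2 = [(a + b) / 2 for a, b in zip(freq1[0::2], freq1[1::2])]

print(grouped_sd(mids1, freq1))  # SD from the 1 °C bars
print(grouped_sd(mids2, freq2))  # SD from the 2 °C bars: nearly the same value
```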
Note: I was doing other things whilst typing this up and realised Passbleh24 posted in between me writing it and posting it. Leaving this here anyway.