Contact (existing members): science@eastlothianu3a.org.uk

Contact (prospective new members): science.new.members@eastlothianu3a.org.uk

When:

Monthly – 4th Monday of the month 2.00 pm – 4.00 pm, September – May (excluding December)

Where:

The Star Room, John Gray Centre, Haddington

 

[Image: Clever Chemists!]

[Image: U3A Theoretical Physics Group Leader?]

[Image: DNA – 20 of the 3 billion bases in human DNA!]

[Image: Merlin study in Lammermuirs]

[Image: King’s Buildings visit – stylish paper lab coats]

Programme for 2023/2024 – Talks at 2.00 pm, the Star Room, John Gray Centre, Haddington

Date Speaker Topic
25/09/23 Michael Williams (Eaglescairnie) Green Farming (Possible Visit)
23/10/23 Gordon McInnes (Member) What’s Happening to the Environment
27/11/23 Dr Alan Bridger The Square Kilometre Array Radio Telescope
22/01/24
26/02/24
25/03/24
22/04/24
27/05/24 All Members’ Forum

2021/2022

We returned to face-to-face meetings in September. The venue is the Upper Hall at the Bridge Centre with Covid precautions in place. Numbers have initially been restricted to 18 plus speaker, but this remains under review.

Is There Anyone Out There?           25 October 2021

Our own Laura Mole gave us a fascinating talk on one of the fundamental questions posed by us humans on this tiny little fleck of matter in our vast universe. Our planet is the ‘Pale Blue Dot’ of the iconic photograph taken by Voyager 1 as it sped beyond the orbit of Neptune, a picture which emphasises our total insignificance in the immensity of space.

Laura began her talk with a look at a definition of ‘life’ and the elements of which life on Earth is composed. Specifically, all life we know about is based on carbon, and requires a number of other elements (such as hydrogen, oxygen, nitrogen, sulphur and phosphorus) which, together with carbon, form the molecules on which life is based. Life can be defined in terms of the ability to self-replicate and to undergo a process of evolution. With the exception of viruses, living organisms exhibit some form of cellular structure: compartments separated from the local environment by a wall or membrane within which multitudes of interconnected chemical reactions comprise metabolism. Life is ubiquitous on Earth: found most obviously on land, in the sea and in the air. However, there are many microorganisms which live in extreme environments such as hot volcanic springs, hydrothermal vents on the ocean floor (high pressures as well as high temperatures), very alkaline or very high salinity lakes, geothermal muds, very dry conditions, very cold conditions, and so forth. Such organisms are known as extremophiles. But, wherever they are found, all life forms on Earth depend on liquid water.

Life is likely to have originated on Earth about 4 billion years ago, although the earliest evidence in the fossil record is of a colonial microorganism from about 3.5 billion year old rocks. The fossils are of stromatolites, reef-like accumulations of hard mineral secretions produced by the organisms. Stromatolites still exist today. How life may have originated on Earth can only be postulated, but many experimental investigations have been conducted over the years. Conditions on Earth at the time were very different from now, with an atmosphere consisting predominantly of methane, ammonia, water vapour and carbon dioxide – no oxygen. In the 1950s Stanley Miller and his colleague Harold Urey conducted an experiment in which they created a gas mixture like that proposed for the early Earth, contained in a flask through which they passed electric sparks to simulate lightning. Experiments like this yielded key organic compounds from the simple gas mixtures, in particular amino acids and nucleic acid bases, both essential components of living organisms. This was the ‘primordial soup’ from which life may eventually have arisen.

So, what about the rest of the universe? Is there evidence for life-associated molecules from elsewhere than Earth? The answer to that question is Yes. Meteorites known as carbonaceous chondrites are ancient remnants from the formation of the solar system. They have been found to contain complex organic compounds including amino acids and nucleic acid bases formed by non-living processes. This suggests that such materials, indispensable for the development of life, could be common in the universe.

Earth’s orbit around the sun falls within a ‘habitable zone’ (‘Goldilocks’ zone) meaning it is neither too hot nor too cold, such that water exists largely in its liquid state.  Do similar rocky planets or their moons outwith such a habitable zone have local conditions which might just favour life? The answer is probably again Yes. For example, gravitational tidal heating may play a role as in two of the moons of Jupiter, Io and Europa. Europa has a surface composed of fractured water ice, indicative of flow over a liquid interior. Perhaps this is liquid water. Titan, Saturn’s largest moon, has a dense atmosphere consisting mainly of nitrogen and methane, with smaller amounts of other organic compounds and it is not inconceivable that volcanic warming might provide locales on the surface for the development of more complex organic matter.

What about other planets occupying a habitable zone? In the Solar System there is Mars, concerning which there is much evidence that liquid water existed on the surface in the distant past, despite the fact that it is an arid, freezing world today. It may be that billions of years ago similar processes occurred on Mars as on Earth, and that simple life forms may have developed. They may yet survive – think of the extremophiles on Earth. Current work by the various rovers exploring the Martian surface may well shed light on this issue.

In the rest of our galaxy (the Milky Way) various estimates in the 1960s using the ‘Drake equation’ suggested that there might be anything from 1 (us) to several million technologically advanced civilisations in our galaxy. Since then, this range has been narrowed, now 1 to several thousand. As yet, apart from a couple of false alarms, no communication has been achieved. The false alarms were unexpected regularities in radio signals detected by radio telescopes: LGM (for ‘little green men’) discovered by Jocelyn Bell in the 1960s, which turned out to be signals from rapidly spinning stars called ‘pulsars’; and the ‘Wow!’ radio signal of 1977, which was never repeated and now appears to have been due to a natural event.
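
For reference, the ‘Drake equation’ mentioned above is simply a product of factors; the symbols vary a little between textbooks, but a common form is:

\[
N = R_{*} \times f_{p} \times n_{e} \times f_{l} \times f_{i} \times f_{c} \times L
\]

where R* is the rate of star formation in the galaxy, fp the fraction of stars with planets, ne the average number of potentially habitable planets per star with planets, fl, fi and fc the fractions of those on which life, intelligence and communicating civilisations respectively arise, and L the average lifetime of such a civilisation. The huge spread in the estimates quoted above comes almost entirely from the uncertainty in the last few factors.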

In the last two decades the number of planets known outside the Solar System (exoplanets) has grown to 4528, in 3360 planetary systems, with a further 7798 candidates remaining to be confirmed. Only two of these planets have been directly imaged; most of the remainder were detected by measuring regular slight dips in the intensity of light received from their parent stars. Although, because of their size, the majority of the exoplanets discovered are gas giants similar to Jupiter, Saturn, Uranus and Neptune, a number of smaller rocky planets have been confirmed. How might we find evidence for life on any of them?
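
As a rough guide to why those dips are so slight (the numbers here are standard illustrative values, not figures quoted in the talk), the fractional drop in starlight during a transit is approximately the ratio of the areas of the planetary and stellar discs:

\[
\frac{\Delta F}{F} \approx \left(\frac{R_{p}}{R_{*}}\right)^{2}
\]

A Jupiter-sized planet crossing a Sun-like star (radius ratio about 0.1) dims it by about 1%, while an Earth-sized planet (radius ratio about 0.009) produces a dip of less than 0.01% – which is why very stable space-based photometry has been needed to find the smaller rocky planets.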

The main approach is to use spectroscopic analysis of starlight passing through the planetary atmosphere to try to detect biosignatures – ie. the presence of molecules produced by living processes. A key such signature could be oxygen, although there are other candidates. But even if we can one day infer the presence of life in this way – and we have not yet done so – sending probes or ourselves to check it out on the ground is not an option. Distances from Earth are many light years, immensely too far for current transport technology.

So, where are we? Our galaxy is likely to contain countless planets, many of which might be like our own in falling within the habitable zone of a parent star. Our galaxy is one of trillions in the universe, each of which may contain countless planets. Therefore the number of planetary incubators for life is potentially huge. For this reason I would be utterly amazed if life has not originated many, many times in our galaxy alone, let alone everywhere else. As to the development of self-conscious intelligent life or, further, technological civilisations, that is another question.

Is there anyone out there?  Maybe.

Peter R

 

How James Clerk Maxwell Created the Modern World                  27 September 2021

A capacity audience (18) heard Professor Peter Grant, one of the James Clerk Maxwell Foundation trustees, speak on this great 19th C Scottish scientist. Born at 14 India Street in Edinburgh’s New Town, Maxwell moved with his family to Glenlair in Kirkcudbrightshire where he spent his formative years and to which he returned from time to time throughout his life. India Street is now home to the James Clerk Maxwell Museum, administered by the Foundation. Maxwell completed his schooling at Edinburgh Academy and went on to the University of Edinburgh where his mathematical abilities were more fully developed, then moved to complete his undergraduate studies at the University of Cambridge.

Maxwell’s professional career began at Marischal College, Aberdeen, where he was appointed Professor of Natural Philosophy at the young age of 25. He moved to take the chair at King’s College London in 1860 for a five-year period, probably the most productive of his career, during which he developed his ideas on electromagnetism. In 1865 he resigned and took time out for some years, working on a number of other projects based at Glenlair. In 1871 he returned to Cambridge, where he had been a Fellow of Trinity College prior to his appointment to Marischal College. This time he was appointed as Cavendish Professor of Physics and was in charge of the construction of the Cavendish Laboratory. The latter is world-famed and has been central to fundamental research in physics up to the present day.

Maxwell’s studies of lines of force in electric and magnetic fields, founded on work done by Faraday, led him eventually to the realisation that electricity and magnetism were intimately related, and lines of force propagated through space as electromagnetic waves. He devised his famous equations to describe this phenomenon and deduced that light must be a form of electromagnetic radiation. The Maxwell equations provided the first great unification of fundamental forces in science, and allowed Maxwell to predict the existence of radio waves (discovered later in the 19th C). His work also laid the foundations of modern electrical engineering, pointed the way for Einstein to develop the theory of special relativity, and provided the foundations for other physicists to develop quantum theory and quantum mechanics. Maxwell’s contributions to physics give him a stature equal to that of Newton or Einstein.
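
For readers who would like to see them, Maxwell’s equations in their modern vector notation (a notation which post-dates Maxwell himself) are, for charges and currents in free space:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_{0}}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_{0}\mathbf{J} + \mu_{0}\varepsilon_{0}\frac{\partial \mathbf{E}}{\partial t}
\]

Here E and B are the electric and magnetic fields, ρ and J the charge and current densities, and ε0 and μ0 constants of nature. Combined in empty space these give a wave equation whose speed is c = 1/√(ε0μ0), which turns out to equal the measured speed of light – the calculation which convinced Maxwell that light is an electromagnetic wave.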

Sadly, Maxwell died in 1879 at the young age of 48. He is buried with his parents and his wife at Parton Kirk near Castle Douglas in Galloway.

Peter R

 

 

 

2020/2021

After a pause from March 2020 during the Covid 19 lockdown, meetings recommenced in October 2020 on Zoom, and a full programme was completed to May 2021.

2019/2020

Clock Changes and Road Accidents                 24 Feb 2020

Mike Mayer took us through the complexities of statistically modelling the effect of clock changes on the frequency of road accidents. Currently we change our clocks twice each year – in October from British Summer Time (BST) to Greenwich Mean Time (GMT) and, in March, from GMT to BST. The latter, as we all know, is GMT + 1 hour (GMT + 1). There was a trial period from 1968 to 1971 in which BST (called British Standard Time for that period) was used throughout the year. Studies of road accident statistics for the period showed that there were significant reductions in all parts of the UK, in particular in accidents involving pedestrians. Nevertheless, parliament voted to revert to the GMT/BST system in 1971. Since then, morning working practices (in farming, milk delivery and postal delivery, for example) have changed radically, and a proposal was made for a 3 year trial of ‘Single Double Summer Time’ (SDST), which consists of GMT + 1 in winter and GMT + 2 in summer. The effect would be to extend the daylight period in the later afternoons and evenings – in response to the peak accident rates which occur at this time of day. So far, however, parliament has voted against the introduction of SDST.

A major contribution to discussions has been the data from mathematically modelling the effects of clock changes on accident rates, in particular considering the differences involving morning and afternoon/evening accident rates. The method has been to create a model with the imaginary initial state of 24 hours daylight, and then to tweak the parameters to compare the effects of changing the amount of daylight, in line with various clock arrangements in the year, on this baseline of predicted road accident rates. Essential to this work has been a comprehensive database of road traffic accidents, and the resulting injuries, held by the Department for Transport, which has been used both to help create the model and to check it. The focus of the modelling work has been pedestrians – the largest proportion of injuries and fatalities. The model can be used to predict gains/losses by comparison with the 24 hr daylight baseline if BST were applied throughout the year (the situation in 1968/71), and the predictions compared with the real data from the 1968/71 trial period. The agreement between the model and the real figures is very good, confirming its value as a statistical tool. Two approaches have been used to introduce the twilight periods: one is to split the day into three zones – dark, twilight and full daylight – and the other is to represent twilight as a more realistic gradual change. Either way, the results are similar.

It is interesting that the models gave numbers which were in good agreement with the data from 1968/71, which showed a reduction of accidents throughout the UK. When applied to the SDST conditions, the model again predicts significant reductions in accidents by comparison with the current situation. However, Mike emphasised that statistical models like this could not easily incorporate a whole range of factors which arise in the real world (such things as weather variations), meaning that predictions must include a significant degree of uncertainty. Whether this is enough to account for any reluctance to introduce an SDST trial is moot. ‘Politically sensitive’ might be a phrase to include in the mix here.

Peter R

 

Sun in a Box         27 Jan 2020

The destructive power of nuclear weapons is well known. The energy released from a few kilograms of reactive material is sufficient to destroy a city in an instant. Nuclear fission (as in the original atomic bombs) has been harnessed to generate electricity in nuclear power stations like Torness. However, nuclear fusion, the source of the much greater amounts of energy released by hydrogen bombs, has not, despite more than 60 years of intensive research and development. Nuclear fusion is the process by which the Sun creates the light and heat essential to our survival, and seems to be sustained indefinitely at a steady rate. The central problem in harnessing fusion power on Earth is containing and controlling such a reaction within a suitable housing – hence the title chosen by our speaker.

By means of a graph, Professor Ackland showed that a very large amount of energy is released when hydrogen atoms combine (fuse) to form helium. The fusion process involves the collision of the atomic nuclei, a process which will only occur if the electrostatic repulsion of the positively charged nuclei can be overcome. (Think of pushing the like poles of two magnets together until they touch, and multiply the energy you expend overcoming their natural repulsion by millions of times!) In the Sun gravitational forces help by squeezing the nuclei close together to allow fusion to occur. On Earth, controlled fusion has been achieved by greatly increasing the speed with which the nuclei move around – ie. creating temperatures of millions of degrees (much hotter than the Sun) – so that they approach each other with sufficient energy to overcome the electrostatic repulsion. At such high temperatures the electrons are stripped away from the nuclei and the mixture of nuclei and free electrons is known as a plasma. However, major engineering issues need to be overcome to make the process commercially viable.
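
As a concrete illustration (the specific reaction and numbers were not necessarily those on the speaker’s graph), the reaction targeted by most reactor designs fuses the two heavy isotopes of hydrogen, deuterium and tritium:

\[
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{n} + 17.6\ \mathrm{MeV}
\]

The energy comes from the small loss of mass in the reaction (about 0.019 atomic mass units), released in accordance with Einstein’s E = Δmc².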

In the first place, to create the temperatures sufficient to initiate fusion requires a large input of energy. Currently it has proved difficult to even achieve ‘break-even’ – energy output = energy input, very far from the surplus energy required to operate a viable power station. Secondly, a material at a temperature of millions of degrees will instantly vaporise any solid container. So how can it be contained? The answer is by powerful magnetic fields created by large electromagnets. In the reactor, the magnets are arranged around the walls of a doughnut-shaped container known as a Tokamak (after the original Russian design). The magnetic fields must form an enclosure with no gaps, so that the hot plasma cannot come into contact with the Tokamak walls. Then there is the question of sustaining fusion over an extended period of time – weeks, months, as opposed to a few hours. Given the relative success of small Tokamaks, the final issue concerns scaling up to ‘power station’ size and then to see if it works – specifically if fusion can be sustained over extended periods and substantially more than break-even output can be achieved. This is being explored through a huge multinational operation to build and test a fusion reactor in southern France – the ITER project.

If it is ever to be possible, nuclear fusion power generation is still decades in the future. However, when the benefits are considered the effort is worthwhile: as much energy from a few kilograms of ‘fuel’ as from around 10,000 tonnes of coal or equivalent in oil or gas; no carbon dioxide production; no noxious nitrogen oxides or sulphur dioxide; low levels of nuclear waste in comparison with current nuclear reactors.
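
A back-of-envelope check on that comparison, using the deuterium–tritium reaction above and assuming a typical calorific value for coal of about 29 MJ per kilogram (an illustrative figure, not one quoted in the talk):

\[
E_{\mathrm{fusion}} \approx \frac{17.6\ \mathrm{MeV}}{5\ \mathrm{u}} \approx 3.4 \times 10^{14}\ \mathrm{J\,kg^{-1}}, \qquad E_{\mathrm{coal}} \approx 2.9 \times 10^{7}\ \mathrm{J\,kg^{-1}}
\]

so one kilogram of deuterium–tritium fuel is equivalent to roughly 3.4 × 10^14 / 2.9 × 10^7 ≈ 1.2 × 10^7 kg of coal – around 10,000 tonnes, in line with the figure above.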

The main fuels for a reactor are the hydrogen isotopes deuterium and tritium. The former is readily available from water in essentially unlimited quantities, but the latter is a radioactive isotope which must be manufactured continuously as fusion takes place. The reaction can be used to make tritium from lithium, but this would just recycle any energy produced. By including beryllium, this problem is overcome. Thus it is not quite true that nuclear fusion can achieve cheap energy from an unlimited resource (seawater), since lithium and beryllium would be consumed by the process and represent finite resources. Nor is it entirely free of nuclear waste, although the waste is much smaller in quantity and less hazardous and long lived than the highly radioactive waste accumulating from our present fission reactors.

When all of these problems have been resolved, at the end of the day the fusion reactor is just another heat source which can be used to create steam to power steam turbines for electricity generation. This is no different from conventional and nuclear power stations today.

In summary – very much a work in progress.

Peter R

 

A Beginner’s Guide to Quantum Physics    25 November 2019

In a presentation by one of our own, Laura Mole, we revisited the mysterious world (our world!) of quantum physics. To quote Laura: “The quantum world is weird and bizarre; however the theory behind it is very important to modern science and technology. But can it be so hard to understand? Is reality so strange, and how can a cat be alive and dead at the same time?”

We began with classical mechanics, in which Isaac Newton’s Laws of Motion and gravitation are central to predictions about the motion of large chunks of matter – planets, cricket balls, rockets, vehicles etc. These predictions are intuitive and make sense because they relate to our world as we experience it, ie. on a large scale. Classical mechanics describes the macro world with great accuracy. Not only does it describe the behaviour of matter on this scale, it also does so for waves. Waves represent disturbances travelling in regular patterns through matter, most easily seen in liquids. We are all familiar with waves in the sea and with the waves (ripples) created when a stone is dropped into a still pond. The waves move through the medium (water in this case) with peaks and troughs at regular intervals. However the water itself simply rises up and down: it does not travel with the waves. A wave is really a pulse of energy travelling through the medium. Less obviously, sound and light are also types of wave. Waves have peaks and troughs, and the distance from one peak to the next one (or one trough to the next trough) is the wavelength. The maximum displacement from the rest position (half the peak-to-trough height) is the amplitude. The number of peaks or troughs which pass a given point each second is called the frequency – measured in cycles per second (Hertz).

If a series of identical wave trains travel towards each other they will cross paths and continue on their way. However, where they cross, at intervals peaks of one wave train will coincide with troughs of the other, and the wave pattern flattens – the waves cancel each other out. This is called destructive interference. An application of this is seen in sound-cancelling earphones, where undesirable external noise is eliminated by the production of internal sounds of the same wavelength, but phase shifted to cause destructive interference. Where peaks coincide with peaks (and troughs with troughs) there is a doubling of amplitude – constructive interference. Both forms of interference occur when light of the same wavelength passes through two closely placed slits in a cover and is projected on to a screen. A pattern of bright (constructive) and dark (destructive) bands (an interference pattern) is seen. For this to work well, the slits need to be extremely small and relatively close together: of the order of fractions of a millimetre. This ‘double slit’ experiment was first performed in the 19th century and demonstrated clearly for the first time that light could behave as a wave form.
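
For anyone who likes to see the relationships written down, the basic wave quantities and the double-slit condition can be summarised as follows (standard textbook forms, with A the amplitude, φ the phase difference between the two waves, d the slit separation and θ the angle to a bright band):

\[
v = f\lambda, \qquad A\sin(kx - \omega t) + A\sin(kx - \omega t + \phi) = 2A\cos\!\left(\tfrac{\phi}{2}\right)\sin\!\left(kx - \omega t + \tfrac{\phi}{2}\right)
\]

Two identical waves therefore reinforce (amplitude 2A) when the phase difference is 0, 2π, 4π, … and cancel completely when it is π, 3π, …; in the double-slit experiment the bright bands lie where the path difference from the two slits is a whole number of wavelengths, d sin θ = mλ with m = 0, 1, 2, …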

It became clear early in the 20th century that classical mechanics could not accurately account for the behaviour of individual atoms and sub-atomic particles (protons, neutrons and electrons are the best known of a whole ‘zoo’ of particles), or of electromagnetic radiation (light, for example). Max Planck introduced the term ‘quantum’ to mean a package of energy of fixed value, and further suggested that electromagnetic radiation (including light) was propagated as a stream of quanta. Subsequent theoretical and practical work began to reveal very strange, counter-intuitive properties associated with this atomic/subatomic scale. This led to the Solvay conference of 1927, which was attended by most of the main players in the world of physics at the time, among whom were Planck, Einstein, de Broglie, Bohr, Heisenberg, Marie Curie (the only woman), Schrödinger and many others. A substantial number already had, or would go on to win, Nobel Prizes for their contributions to quantum physics. Fundamentally it had become clear that on the very small scale entities could have both particle and wave properties (wave/particle duality). De Broglie derived an equation which related wavelength to momentum (mass times velocity), which is interesting in that it is completely general: large objects like ourselves have a wavelength, but one so tiny as to be negligible. (If you say two people are on the same wavelength, then you are physically correct provided they weigh the same!). An even greater quantum strangeness was the finding that it was impossible to determine accurately both the position and the momentum of a particle – you could measure one but not the other at any given moment – the essence of what became known as the Heisenberg Uncertainty Principle. Schrödinger derived an equation which encapsulated many of the ideas in quantum theory, shown here for decoration only!
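
For those who would like to see the ‘decoration’, the standard textbook forms of the three relations mentioned above are (with h Planck’s constant and ħ = h/2π):

\[
\lambda = \frac{h}{mv}, \qquad \Delta x\,\Delta p \geq \frac{\hbar}{2}, \qquad i\hbar\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
\]

The first is de Broglie’s relation between wavelength and momentum, the second is Heisenberg’s uncertainty principle, and the third is Schrödinger’s (time-dependent) equation for the wave function ψ of a particle of mass m moving in a potential V.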

What became very clear (!) was the ‘fuzziness’ of nature on the quantum scale – certainty was replaced by probability. The positions of particles, like electrons for example, could be described only in terms of probabilities. Thus if there was an 80% probability that an electron would be in a particular location then all you could say was that at any point in time the electron is more likely to be at this location. But it would be wrong to say it could not be elsewhere. This was the basis of Einstein’s difficulty in accepting quantum mechanics as the last word in physics: “God does not play dice”. Another pair of difficult-to-swallow ideas was that a particle could theoretically be in two different states at the same time, and that two particles separated by any distance could exchange information instantaneously (if one were altered, the other would be altered simultaneously in the same way) – “spooky action at a distance” or quantum entanglement – which again gave Einstein a headache. Not to mention you and me.

The Copenhagen interpretation of quantum theory involves using mathematical techniques to describe what is happening to a particle without observing or measuring it. It is convenient for the purposes of calculations and predictions to consider a particle as existing in two states simultaneously. That’s fine, it works:  in the quantum mechanics underpinning electronics and electronic devices; in respect of design and function of so-called quantum computers which are likely to form the next generation of computing power; in aspects of laser technology; in the development of novel materials; and so on.

And thus we come to Schrödinger and his thought experiment involving the infamous cat. The cat is considered to be in a sealed box for one hour where, because of the internal random mechanisms associated with delivering a lethal dose of poison, it has a 50% chance of being alive, or dead, after the hour. Until the box is opened it is impossible to predict which outcome pertains. This relates to a quantum physics interpretation in which the cat would be considered as both alive and dead at the same time within the closed box (cf. a particle existing in two states simultaneously), but the outcome is only determinable at the time of measurement (when the box is opened in the case of the cat). Clearly an absurdity! Schrödinger was demonstrating that the Copenhagen interpretation, valuable as it was on the small scale, made no sense at all in the context of classical physics.

Finally, I’ll round off with a quantum version of the double slit experiment I described earlier. Single photons (particles of light) can be projected one at a time from a laser towards a screen with very narrow, very closely spaced slits. On passing through the slits, photons strike a detector which is similar to the image-capturing electronic system used in digital cameras, but very much more sensitive. A photon is ‘fired’ from the laser only after the previous one has been registered by the detector. Thus photons arrive at the detector one at a time. Other versions of this set-up can project and detect electrons or atoms. The results are the same. You can imagine that if a particle had a 50% chance of passing through either slit then, for a large number of particles travelling one at a time, about half would go through one slit and half through the other. The detector should show two impact areas corresponding to the two slits. In fact it does not. It reveals an interference pattern of bright and dark bands! The particles have behaved as waves. But, strangely, each must have interfered with itself, because only one particle flies through at a time. The beginning of an explanation is that somehow the particle, in its wave form, passes through both slits. That’s all very well, but if a detector is introduced to determine which slit(s) the particles fly through, the interference pattern is lost and only two impact areas are found!! It gets even more complicated here, but I have to give up now and recommend to anyone whose brain is not yet fried to do some Google searches on the topic.

Peter R

Human Evolution    28 October 2019

This talk was designed to introduce the group to the fascinating question of human origins. It was delivered by your very own me, and I accept full responsibility for any unintended errors or misrepresentations which may have found their way into the delivery.

Classified as Great Apes, grouped with Chimpanzees, Bonobos (Pygmy Chimpanzees), Gorillas and Orangutans, we (Homo sapiens) belong to a subgroup known as the Hominini (hominins). Our closest living relatives are the chimpanzees, confirmed by DNA analysis which shows that we share over 95% of our genetic makeup with them. Humans and chimps shared a common ancestor from which the two lines of descent diverged. ‘Molecular clock’ analysis suggests this occurred 6-7 million years ago. (Simply put, a comparison of selected segments of DNA from humans and chimps shows differences in base sequence, the extent of which is approximately proportional to the time since the two groups diverged. This allows the elapsed time to be estimated.) Evidence for our evolutionary past comes from fossilised remains, and from the presence of stone tools, cut marks on animal bones, fossilised footprints, cave art and so forth. However, with some notable exceptions, the fossil evidence can be very fragmentary and current collections represent only a few hundred individuals spanning the immensity of time since we diverged from the chimpanzee. Any reasonable calculation suggests that the number of individuals of all species of hominin which have lived over the 6-7 million years would be at least in the hundreds of millions. Thus, statistically, our sample of a few hundred is a minuscule percentage of the total. Also, new discoveries will continue to be made. For both these reasons we have to accept that the present model for human evolution, while persuasive, is very much provisional.
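
The molecular clock calculation sketched in the brackets above can be written down very simply. If DNA differences accumulate independently in each lineage at a roughly constant rate μ per site per year, then the fraction of sites d that differ between the two species is about twice the rate multiplied by the elapsed time T:

\[
d \approx 2\mu T \quad\Longrightarrow\quad T \approx \frac{d}{2\mu}
\]

With illustrative values (not necessarily those used in the talk) of d ≈ 1.2% for the human–chimp difference at suitable sites and μ ≈ 10⁻⁹ per site per year, this gives T ≈ 0.012 / (2 × 10⁻⁹) ≈ 6 million years, in line with the 6-7 million quoted above.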

Until around 2 million years ago all evidence so far confines hominins to Africa, and most fossil finds have been made in east and south Africa. The earliest hominin remains date from between 7 and about 5 million years ago. In order from oldest to most recent they are Sahelanthropus, Orrorin and Ardipithecus. All show evidence of bipedalism – they walked upright on two legs. Their teeth are smaller than those of the apes from which they evolved, and their faces seem to have been less prognathic (having projecting jaws, like the chimp for example). Whether any one of these groups could be ancestral to humans is not known, but animals rather like them could have been. At least they demonstrate two characteristics which evolved early in our development: bipedalism and less ape-like dentition.

From just over 4 million years ago we have evidence for a new group whose ancestry to humans is on much firmer ground. The extensive fossil material belongs to the genus Australopithecus. There seem to have been around five species at different times until about 2 million years ago, when, we assume, the group became extinct. Many finds have been made in east and south Africa, but probably the most famous is ‘Lucy’, found in the Afar region of Ethiopia. Much of her skeleton was preserved where she died and it has been possible to do a complete reconstruction with the aid of other more fragmentary fossil material of her species. She was quite short (under 5 ft), with arms much longer than her trunk and hands with curved digits. These adaptations suggest she spent significant time moving around in trees. However, her pelvis, femur and feet indicate that she was well adapted to bipedal locomotion. This accords with her likely habitat of open woodland and savannah. Fossil footprints of a family group walking over volcanic ash confirm they were accomplished bipeds. Tool use has been inferred from cut marks on animal bones from about 3.3 million years ago.

Lucy’s species (Australopithecus afarensis) is the likely source of a population which evolved into the first archaic human – Homo habilis. Fossils show further ‘humanisation’ of the face, more refined adaptations for bipedalism, and an increased cranial capacity from about 500cm3 in its predecessor to about 700cm3 (cf. Homo sapiens average of about 1350 cm3). Importantly, habilis is associated with a simple stone tool tradition based on fractured pebbles. Around 2 million years ago a new hominin appears in the fossil record: Homo erectus. This hominin evolved from Homo habilis, and the two species overlapped for a significant period of time. Erectus was well adapted to the open savannah: tall (1.7 metres), long legged with a long stride, skeleton other than the skull much like our own, and a further increase in cranial capacity (800 cm3 in earlier examples, up to 1100 cm3 in the later variants). Much was learned from an amazingly complete skeleton of a juvenile – ‘Turkana/Nariokotome Boy’ – about 1.6 million years old. It is highly likely that this species hunted in small groups, an endurance/persistence method similar to that used historically by African Bushmen. This involves non-stop pursuit of faster prey by a mixture of running and walking until the prey animal is slowed or stopped by overheating. Erectus were probably fairly hairless with sweat glands to facilitate efficient cooling, and probably had dark skins to protect against the high UV levels they would experience. Hence in skeleton and body, other than the head, they were very much like us. They are associated with a much more sophisticated stone tool tradition than their predecessors – shaped stone hand axes made from flints and other suitable material. Homo erectus is the first hominin, so far as is known, to migrate out of Africa and spread widely throughout Eurasia. This happened soon after they evolved since remains found in the Caucasus date to 1.8 million years. To date Homo erectus is the most successful hominin, surviving as a species for about 1.75 million years – the most recent remains dating to about 250,000 years ago. It overlapped in time, if not necessarily geographically, with all subsequent hominins including us.

In Africa erectus gradually evolved into a key hominin, Homo heidelbergensis,  which is considered to have provided the common ancestor to ourselves and other species. This species was robustly built, around 1.8 metres tall, with a face which looked much more like ours. The cranial capacity had increased to around 1250 cm3. Again this hominin migrated out of Africa into Europe and Asia. The earliest example dates to about 600,000 years ago, but it may have existed from an earlier time. The most recent is from around 120,000 years ago. It is proposed that the European population evolved into the neanderthals, Homo neanderthalensis, the African population into Homo sapiens, and the Asian population into a species closely related to the neanderthals, as yet without an agreed Latin name – the Denisovans. The last only became known to science around 2010 following the study of fragmentary remains found in the Denisova cave in Siberia. Archaic forms of Homo sapiens were present in Africa by around 300,000 years ago. While there were earlier migrations by our species, the last significant migration out of Africa by Homo sapiens occurred about 70,000 years ago – a relatively small group from which all modern non-African people are descended. This conclusion follows from DNA studies which show that all non-African people are genetically very similar, while African peoples are genetically much more diverse and do not have the same DNA markers as the rest of the human population today. Homo sapiens spread relatively quickly to populate Eurasia, Australia and, 13,000 to 15,000 years ago, reached North America by migration into Alaska from north eastern Siberia.

In the last 10-15 years DNA analysis of modern populations and of fossil bones has revolutionised our understanding of our past. It has provided evidence that all modern humans retain mitochondrial DNA derived from one individual female who lived in east Africa about 150,000 years ago. It has shown that our species mated with neanderthals and with denisovans: all humans of European origin contain about 2-5% neanderthal DNA,  and all humans of Asian, Polynesian, Melanesian and Australasian origin contain a similar percentage of denisovan DNA. Similarly, the Denisova cave fossils yielded data proving that denisovans and neanderthals interbred. Remarkably, one individual was proven to be a first generation hybrid girl with one X chromosome from a neanderthal mother and the second X chromosome from a denisovan father. While DNA studies will continue to shed light on our past, they are limited to relatively recent remains (100,000 years old or little more) which have been preserved under conditions which slow down the natural degradation of DNA.

A number of fossil discoveries remain to be fitted in to the story of human evolution – Homo floresiensis, Homo naledi, Homo antecessor and Homo luzonensis. It remains possible that one or more of these hominins could require a major re-think of the current model. New finds will undoubtedly occur and will contribute to this dynamic process of refining the pathways and branches in the record of our origins and descent.

If you want to find out more about the species mentioned in this report then enter the name in a Google search. There is a lot of information out there.

As an afterthought, hominin remains, and other traces, not of Homo sapiens have been found in England and Wales: heidelbergensis bones from Boxgrove in Sussex (500,000 years old); neanderthal remains from Swanscombe, Kent (400,000 years) and from Pontnewydd, North Wales (225,000 years). Most intriguing are the footprints dated to 850,000 years ago on the beach at Happisburgh, Norfolk, which could be early Homo heidelbergensis, late Homo erectus, or perhaps another hominin like Homo antecessor. No fossil bones have been found, but the area is known for the presence of stone tools like those used by heidelbergensis and possibly by the other hominins. The earliest example of Homo sapiens in Britain is ‘Cheddar Man’, who died about 9000 years ago. DNA analysis showed that he had a very dark skin and blue eyes!

 

Peter R

 

Inspiring Scottish Scientific Women    23 September 2019

This was a talk of different character from the usual. Its focus was more on the scientist than the science, in particular the contributions made by women to a discipline long dominated by men and, indeed, once considered an inappropriate profession for the female gender. Catherine Booth entertained and informed us using the examples of a few of the Scottish women who made significant contributions to their chosen fields. Here is a brief summary.

  1. Elizabeth Fulhame (18th century) conducted and described many experiments to try to make cloths of gold, silver and other metals. In the course of her investigations she established methods for the extraction of metals from their compounds other than by smelting, and presented evidence which undermined the widely held ‘phlogiston’ theory of combustion. She was among the first to investigate catalysis and to investigate the role of light in chemical reactions. Elizabeth’s experiments were always carefully designed, meticulously recorded, and reproducible by other scientists. Her approach followed the work of other, now better known (and male) scientists, including Lavoisier and Priestley. Rightly praised by her peers, to whom she was known as the “ingenious Mrs Fulhame”, Elizabeth made a major contribution to the science of chemistry. Her 1794 book, An Essay On Combustion with a View to a New Art of Dying and Painting, wherein the Phlogistic and Antiphlogistic Hypotheses are Proved Erroneous was a widely read and respected publication.
  2. Williamina Fleming (1857-1911), astronomer born in Dundee.  She emigrated with her husband to the USA, but was then abandoned by him when pregnant. Following a job as maid in the home of  the Director of  Harvard Observatory, Williamina was employed as a ‘computer’ in the Observatory itself. Her role was to quantify and interpret stellar spectra from photographic plates, which required precise attention to detail and the ability to spot anything unusual. She established a new system for star classification, discovered 10 novae (stars which brighten dramatically as they explode, and then fade), catalogued over 200 variable stars (brightness varies over time) and discovered many gaseous nebulae including the Horsehead Nebula in Orion (in 1888). Williamina was the first of the famous women astronomers for whom the Harvard Observatory became renowned.
  3. Doris Mackinnon (1883-1956) was a zoologist from Aberdeen who specialised in parasitic protozoa (for example the Amoeba responsible for amoebic dysentery, among many others). She became the first female Professor at King’s College, London and prepared many of the illustrations for the influential book by D’Arcy Wentworth Thompson, ‘On Growth and Form’ (1917). In addition to her laboratory expertise and her administrative flair, Doris was a skilled lecturer and an excellent science communicator. The last was well demonstrated in her regular radio broadcasts for schools, and her public lectures.
  4. Eleanor Pairman (1896-1973), mathematician from Lasswade. Eleanor graduated in mathematics from the University of Edinburgh, gaining first class honours in Mathematics and Natural Philosophy, and was much praised for her considerable accomplishments. Further study in the USA led to a PhD, and at this time she met her future husband. Among other things, she prepared computational tables for ‘computers’ (ie. human ones, not machines) to shortcut complex calculations. Her husband taught in a male-only college where it was not possible for her, as a woman, to be employed. Eleanor found a niche teaching mathematics to blind students, using her sewing machine to replicate geometric diagrams and mathematical symbols which could be felt like Braille.
  5. Nora Miller (1888-1994), marine biologist born near Stirling, spent most of her working life at the University of Glasgow. She specialised in lungfish. Nora also did comparative studies of fossilised remains with modern species of marine creatures and birds, and identified a mysterious, tiny fossil with a big name (Palaeospondylus gunni), found in a quarry in Caithness, as the larva of a Devonian lungfish. She took early underwater photographs in the 1930s, using what we would consider (very) primitive diving equipment. Nora was a gifted and much admired lecturer, and became a Fellow of the Royal Society of Edinburgh – very rare for women at the time.
  6. Elaine Bullard (1915-2011) was a botanist. Although born in Greenwich, she spent almost all her working life in Orkney, from 1946, and so may be considered an honorary Scot. She had no formal botanical qualifications, although Heriot-Watt University did award her an honorary degree later in life. Elaine became a renowned expert in the botany of Orkney, Caithness and Sutherland, wrote many papers, and acted as consultant to academics and other visitors, including interested amateurs. With scant regard for the weather, Elaine did much of her research tramping and travelling around the islands in a distinctive Reliant Robin van with tent attachment. She was Official Recorder for Orkney for the Botanical Society of the British Isles for 46 years, relinquishing the role at the age of 93. Repeated surveys over many years allowed Elaine to report on gradual changes to the botanical makeup of the islands arising over an extended period of time.

Peter R

2018/2019

Members’ Afternoon   24 June

We had a full and sometimes quite lively afternoon dealing with really interesting questions submitted by some of the members. Let me apologise in advance if I inadvertently misrepresent any views not my own!

First, Terry Page, assisted by contributions from Ulrich Loening and Peter Ward, addressed the question of tidal power. This covered the engineering involved; the use of tidal races (as in the Pentland Firth) and other approaches to harvesting tidal movements; efficiency and output; effects on the environment; and costs.

Then I attempted to give some explanation of the expansion of the universe, emphasising that it was observed on a cosmological scale but counteracted by gravity in a more ‘local’ context. Distant galaxies recede from an observer at velocities which increase with distance – calculated from redshift data (seen in their light spectra). In the last three decades it has been shown that the expansion is accelerating, rather than slowing down as once was thought, the extra ‘push’ ascribed to an unknown form of energy – ‘dark energy’. The apparent movement of the galaxies is due to the expansion of space itself, not their own intrinsic velocity. The rate of expansion is approximately 70 km per second per megaparsec (3.26 million light years).
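
The relationship described above is Hubble’s law: recession velocity is proportional to distance. With the quoted rate of expansion it reads (the worked number is just an illustration):

\[
v = H_{0}\,d, \qquad H_{0} \approx 70\ \mathrm{km\ s^{-1}\ Mpc^{-1}}
\]

so a galaxy 100 megaparsecs away (about 326 million light years) recedes at roughly 70 × 100 = 7,000 km per second, and doubling the distance doubles the recession velocity.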

Next, I dealt with a question concerning how new species arise, aided by Ulrich who explained the difficulty of defining species. The basic idea is that species formation occurs at the population level. If two populations of a species become isolated from each other and cannot interbreed, with time each population accumulates its own particular sets of gene mutations. The composition of the total gene complement (gene pool) in the populations diverges as natural selection takes its course. Eventually the differences are such that two new species arise from the ancestral population. Hence it is not necessary to invoke two individuals who have ‘changed’ (an Adam and an Eve, say) as founders of a new species.

Gavin Marshall then explained the potentially catastrophic effects of a giant coronal mass ejection by the sun, should it strike Earth. The fast moving electrons and protons (a plasma) and the associated magnetic fields could potentially cause serious, if not catastrophic, disruption to electrical and electronic systems on the Earth’s surface, and to orbiting satellites. Contributions from the audience concerned satellites, now in orbit, designed to monitor the activity of the sun (solar weather) and give advance warning of coronal ejections. It is of interest that the effects of large ejections striking the earth would be of little consequence were it not for the electrical and electronic systems upon which the world now relies. The largest recorded incident was in 1859, and had little effect other than to cause power surges in the embryonic electric telegraph system of the time.

With various contributions from the audience I did my best to explain a little about ‘dark matter’. That it exists in the universe is inferred from observations of the shapes and movements of galaxies and the extent to which light is ‘bent’ around distant galaxies (producing an effect called gravitational lensing). Gravity is a property of matter and increases with increasing mass. The mass of visible matter in galaxies is insufficient to account for the gravitational effects observed, hence the need to invoke additional and invisible mass. Overall there needs to be an average of 5 times as much dark matter as visible matter in most galaxies. For the universe overall, matter consists of about 15% visible, and 85% dark! As yet dark matter is ‘seen’ only by virtue of its gravitational effects. It does not interact with electromagnetic radiation (light, etc) in any way, thus its invisibility. It does not consist of the usual particles like protons and electrons, but may consist of quite different elementary particles which have been theorised but never detected – WIMPs (Weakly Interacting Massive Particles), for example. I hear you say “you are having a laugh, mate”. But there you have it.

Tony Reeves then gave an explanation of electricity as produced by a battery (DC) or generated by a power station (AC). Electrons will only move along a wire if they can be supplied at one end and removed at the other, for example coming from the negative terminal of a battery and returning via the positive terminal. This is a circuit, and as electrons flow along a circuit they can do ‘work’, for example heat up an element to boil a kettle, or drive the electric motor of a lawn mower. Because the electrons are confined to a circuit you cannot end up with a pile of electrons in your house after using your appliances.

Finally, Terry, with contributions from others, was able to explain how it is that we can still see light which has been travelling for millions of years through space, even though the light intensity reduces by a factor of 4 when distance from the source is doubled (an example of an inverse square law). The immense output of a star sends light waves/streams of photons of huge intensity in all directions. We can still see objects millions of light years away because there are still enough photons to register on our retinas, or on the photographic systems of telescopes.
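
The inverse square law Terry described follows from simple geometry: the star’s light spreads out over the surface of an ever larger sphere, so the intensity I received at a distance d from a source of total output (luminosity) L is

\[
I = \frac{L}{4\pi d^{2}}
\]

Doubling d multiplies the denominator by four, hence the factor-of-four drop in intensity; but however large d becomes, I never falls to zero, so enough photons can still arrive from objects millions of light years away to be detected.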

Peter R

New approaches to infectious diseases in an era of antibiotic resistance.         27 May 2019

Dr Donald Davidson is a Senior Research Fellow at the Centre for Inflammation Research (CIR), University of Edinburgh. He introduced us to some of the fine detail of the chemical and cellular function of our innate immune system, and the application of an entirely new therapeutic approach to human infections. Innate immunity is the body’s ability to mount a non-specific defence against bacteria or viruses, based on chemical agents and the responses of specialised defence cells such as neutrophils found in the blood and tissue fluids. Neutrophils are the most abundant class of white blood cells and, together with two other cell types – macrophages and dendritic cells – they work by engulfing and digesting invading bacteria or viruses and hoovering up the remains of infected tissue cells which have died. Innate immunity should be distinguished from adaptive immunity, the form with which most of us are probably more familiar. The latter involves mechanisms in the immune system which respond to specific ‘invaders’ (eg. a particular bacterium, virus or foreign tissue) by inducing the formation of antibodies (specialised proteins which specifically attack only that invader) and/or activating specialised cells (T cells) which do likewise.

At the heart of Dr Davidson’s research are small proteins known as host defence peptides (HDP). These molecules are antimicrobial: they either directly kill bacteria or viruses or stimulate aspects of the inflammatory response by the body to facilitate their destruction by other means. Generally, we tend to think of inflammation in terms of redness (increased blood flow), swelling, pain and heat – our everyday experience of this event on the body’s surface. But these changes at a wound or infection site simply reflect the outward signs of a complex chemical and cellular response: the increased blood flow etc. facilitating the delivery of these protective agents to the site concerned. A similar inflammatory response occurs internally.

HDPs have a role in this process. Some of the research on antibiotic resistant lung infection (caused by the bacterium Pseudomonas aeruginosa) has shed light on a mechanism. An HDP is produced naturally in the lining (epithelium) of the airways. Babies and people over 65 are more susceptible to this infection and recover more slowly (sometimes not at all), and this correlates with the quantity of HDP produced by the epithelial cells: low in babies, increasing to a maximum in adulthood and declining in old age. The specific peptide involved in this situation belongs to a class of HDPs called cathelicidins. The only one in humans is called LL-37. It is quite clearly important in the defence of the lungs against infection. LL-37 floods into infected cells and stimulates a number of intracellular responses: packaging and destruction of some of the bacteria, release of signalling molecules which enter the bloodstream and stimulate the inflammatory response, self destruction. Neutrophils then arrive in large numbers and produce yet more LL-37, multiplying the effect and engulfing bacteria and cellular debris, the response declining as the infection  is cleared. Low levels of LL-37 increase response times and may result in failure to clear the infection. A similar pattern of susceptibility and response is seen in a common lung infection by respiratory syncytial virus (RSV). It is also the case that LL-37 is protective against influenza. More widely, it has been shown that in a disorder called morbus Kostmann, where neutrophils are deficient in cathelicidin, there is a greatly increased susceptibility to infections. Also, genetically engineered mice deficient in the murine equivalent of LL-37 have increased susceptibility to infections in the lung, skin, intestinal tract, cornea and urinary tract. In humans LL-37 is found in airway surface liquid, plasma, sweat and other body fluids. It is upregulated (produced in greater amounts) in infection and inflammation as seen in increased concentration in these body fluids.

HDPs are directly microbicidal – they kill the target cell or virus. However, simply injecting large doses of HDPs is fraught with danger: trials with mice have revealed unpredictable and severe side effects from high dose regimens. Therefore it is probably more practical to use lower concentrations of HDPs as necessary to stimulate the innate response. This is a much more subtle approach than with antibiotics, and it would completely circumvent the problem of antibiotic resistance in bacteria. In addition, it would avoid the damaging effects of antibiotics on the many bacteria we harbour which are essential for our health – our microbiome.

LL-37 is the only cathelicidin produced by humans, but we produce other types of HDPs and their role is also the subject of much research. These could contribute significantly more to the effectiveness of this kind of treatment.

Although much work remains to be done to prove the effectiveness of this therapeutic approach, it offers one strand of hope for a future where we may have to face up to the problem of a wide range of antibiotic resistant pathogenic bacteria. This enhancement of our natural defences might be one way to avoid a return to death rates from infections not seen since before the advent of antibiotics.

Peter R

 

Is Biochemistry Boring?  29 April 2019

The idea for this talk originated with University Challenge. In the 2016 final, a particularly bright young man on the winning team (Powell, Peterhouse Cambridge) seemed able to answer questions correctly on almost any topic – except biochemistry. When teased about this by Jeremy Paxman at the end of the show, the student’s response was ‘biochemistry is boring’, with the implication that it was too tedious for him to learn much about it. Dr Mike Billett, a biochemist, took this on as a challenge to prove otherwise – hence today’s talk.

Mike built his argument around the workhorses of the cell: proteins. A cell produces many thousands of different protein molecules, each with its own particular role. They determine the properties of cells – controlling and coordinating the myriad chemical reactions within, regulating cell division, responding to external signals, sending out chemical signals to other cells, regulating the expression of genes, controlling cell shape and movements, and many more. Different cell types express different suites of proteins, determined by the particular suites of genes active in that cell type, in turn regulated by specific proteins.

Protein function is intimately connected to molecular shape. Every protein consists of a chain of sub-units called amino acids – a polypeptide chain. There are around 20 different amino acids, and the number of each type and the sequence in which they are linked in the chain is characteristic of each type of protein. Polypeptides range from fewer than 100 amino acids in small proteins like insulin to many thousands in larger protein molecules. Chemical interactions between the different chemical ‘side groups’ of the amino acids cause the polypeptide to fold spontaneously, adopting the characteristic shape of the functional protein. In most proteins this is a complex 3-dimensional shape referred to loosely as ‘globular’. The shape is exquisitely related to the function of the protein.

Mike developed the idea of shape and function with a look at a specific class of proteins – enzymes. An enzyme is a catalyst which accelerates a chemical reaction involving a specific substance (substrate). Essentially this happens because the enzyme carries a groove or cavity in its surface (‘active site’) which makes a very exact fit with the substrate. Chemical bonds between the enzyme and substrate stimulate the chemical change which converts the substrate to product. In this way, put simply, one enzyme will recognise only one substrate molecule – think of a key and lock to get the idea.

A consequence of this shape recognition is that substances with molecular shapes similar to the substrate can compete for the active site. Such substances are known as competitive inhibitors. In the economy of the cell, the competition between substrate and natural inhibitors is one means of regulating the supply of products. The administration of artificial inhibitors of this sort provides a way in which the function of an enzyme can be compromised, sometimes with lethal effects. Examples include antibiotics which specifically inhibit enzymes found only in bacteria – ‘magic bullets’ which kill bacteria, without affecting the cells in your body. In the treatment of  many other human conditions enzyme inhibitors have a major part to play. Mike first chose the example of statins, medication with which many of us are familiar.
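
Although Mike did not go into the algebra, the behaviour of a competitive inhibitor is captured by the standard textbook (Michaelis–Menten) rate equation, sketched here purely for reference with the usual generic symbols (it was not part of the talk):

\[ v \;=\; \frac{V_{\max}\,[S]}{K_m\!\left(1 + \frac{[I]}{K_i}\right) + [S]} \]

Here \(v\) is the reaction rate, \([S]\) and \([I]\) are the substrate and inhibitor concentrations, \(V_{\max}\) is the maximum rate, and \(K_m\) and \(K_i\) measure how readily the substrate and inhibitor bind. Increasing \([I]\) raises the apparent \(K_m\) but leaves \(V_{\max}\) unchanged: supply enough substrate and the inhibition is overcome – exactly what one expects of two molecules competing for the same active site.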

Statins are competitive inhibitors of an enzyme in liver cells which is involved in the production of cholesterol. The first effect is to lower the internal production of cholesterol by liver cells and hence their secretion of low density lipoproteins (LDL) containing cholesterol ['bad' cholesterol] into the bloodstream. Sensing the decline in intracellular cholesterol, the cells produce more LDL receptor proteins which are embedded in the cell surface. Their shape is complementary to LDL and this allows them to bind to LDL in the blood and therefore enhance the uptake of LDL with its cholesterol load into the liver cells. This ensures that the liver cells maintain adequate levels of cholesterol for essential purposes and reduces cholesterol levels in blood. The latter protects against atherosclerosis which can lead to serious cardiovascular disease (heart attack, stroke). For the more nerdy among us, the enzyme is hydroxymethylglutaryl CoA reductase (HMG-CoA reductase), and statin molecules have a molecular shape very similar to the natural substrate hydroxymethylglutaryl CoA (HMG-CoA). The enzyme begins the chain of reactions (metabolic pathway) leading to the production of cholesterol.

As another example of the importance of shape to protein activity, Mike addressed the treatment of a type of cancer known as B-cell lymphoma. B cells are part of our immune system and produce antibodies, specialised proteins which help us to fight disease. The standard chemotherapy treatment for this and other non-Hodgkin lymphomas is known as CHOP. This is a combination of four chemical inhibitors, C, H, O and P, which affect different aspects of cell division. Two of these (C and H) bind to DNA and interfere with replication (essential before a cell can divide). O binds to proteins essential for the mechanical process of division, and P binds and blocks specific proteins known as cytoplasmic receptors which are necessary for the stimulation of DNA synthesis. Unfortunately, the four chemotherapeutic agents also affect normal dividing cells throughout the body, giving rise to sometimes gruelling side-effects. Not 'magic bullets', therefore, but an addition to CHOP therapy comes closer to this ideal. The additional agent is a monoclonal antibody, which will bind only to a receptor protein on the surface of B cells – healthy B cells as well as cancerous ones – and triggers apoptosis (cell death). The antibody is a genetically engineered protein known as Rituximab, the shape of which is complementary to CD20, a cell surface protein found only on B cells. Thus more widespread effects on the body are avoided. However Rituximab is not sufficiently effective on its own and is therefore used in combination with the four chemical agents in a therapeutic regime known as R-CHOP.

Thus, by focussing on proteins and, more particularly, the importance of molecular shape to function, Mike revealed to us something of the exquisite complexity of biochemistry. Molecular shape recognition lies at the heart of all biochemical processes and, as you have seen from these few examples, our understanding of this is essential in the treatment of human disease. Mike has only scratched the surface of biochemistry for our benefit but I hope he will have convinced most of us that it is a topic which is both complex and fascinating – very far from boring.  Pax Oscar Powell!

Peter R

 

 

Quantum Wonderland     25 March 2019

 

Consider a cat in a box fitted with a randomly activated lethal device. The external observer (you?) and the cat have no control over the activation of the mechanism. Until the box is opened the cat's state (alive or dead) will be unknown. In a sense, before the box is opened the cat can be considered as both alive and dead at the same time – each state has an equal probability of being true. This, of course, is nonsense in the world we perceive – the cat will be one or the other, not both. However in the world of the very small – atomic and subatomic scales – the coexistence (superposition) of two states (e.g. anticlockwise or clockwise spin of an electron) is a concept at the heart of quantum mechanics. The cat in the box is a version of the famous 'Schrödinger's Cat' thought experiment, designed to illuminate some of the paradoxes of the quantum theory then being developed in the 1920s and 1930s. The quantum world is a very strange one, and very difficult to relate to our everyday experience. Professor Andersson gave an illustration of this based on experiments where electrons are fired at a pair of closely spaced, very narrow slits to strike a fluorescent screen on the other side. We tend to think of electrons as very tiny particles, but one version of this experiment, using a beam of electrons, produces a series of alternating lighter and darker bands, or stripes, on the screen. This is an interference pattern typical of wave forms. Exactly the same kind of pattern is well known from experiments with light. However, if the electrons are fired through one at a time, each impact on the fluorescent screen shows as a bright point, as if struck by a discrete particle. As the experiment progresses the bright spots, at first appearing to have a random distribution, eventually show clustering into bands of high intensity alternating with bands of low intensity – an interference pattern as from a continuous beam. Thus electrons, and much else in the quantum world, show wave–particle duality – one example of the superposition of states.
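
In the standard notation of quantum mechanics (not used in the talk, but added here as a pointer for the curious), an equal superposition of the two cat states would be written

\[ |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right), \]

so that on opening the box each outcome is found with probability \((1/\sqrt{2})^2 = 1/2\). The same mathematics describes an electron 'taking both slits at once' until a measurement forces one outcome or the other.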

An electron appears to pass through one slit or the other, not both as a simple wave would do. It is possible to set up the experiment with detectors to identify which slit a given electron passes through. Now things get really weird! With the detectors in operation, the interference pattern is not obtained: the electrons no longer show wave character.  This story goes on but it becomes ever more mind bending.

It is said that the proof of the pudding is in the eating, and this can be applied to quantum mechanics. One might have difficulty in getting one's head round the physics, given how counter-intuitive it can be, but the fact remains that quantum mechanics underpins the manufacture and operation of a large range of gadgets and instruments we are very familiar with, and some with which we are less familiar. Think of our everyday electronic devices (phones, digital TV and radio, computers), MRI scanners, PET scanners, GPS devices including satnav, ultra-accurate time keeping based on 'atomic' clocks (without which navigation systems would be insufficiently accurate), quantum computers, quantum cryptography (unbreakable codes), advanced light microscopy, the LIGO system to measure gravitational waves, and many more! This is the quantum wonderland we live in: everyday technology built on the strange quantum world that science continues to explore.

Peter R

 

 

Recent Advances in Diabetes     25 February 2019

 

Dr Nicola Zammitt, Clinical Director – Edinburgh Centre for Endocrinology and Diabetes, generated a great deal of interest among the group with her talk on the treatments for Type 2 diabetes. This is the form of diabetes which is giving our health service greatest cause for concern, given its high and increasing frequency in the population. It is closely associated with being overweight. As our society experiences a steady increase in the proportion of overweight and obese people, the frequency of Type 2 diabetes, with all its cost implications for the NHS, is also increasing. Within the diabetic population, about 85% have Type 2.

At the centre of the story is the hormone insulin, which is produced by specialised cells in the pancreas (β cells). Its role is to stimulate the absorption of excess glucose from the blood stream into the liver or muscles where it is converted to the energy store, glycogen. Failure to produce insulin at all results in Type 1 diabetes where, before treatment, blood glucose rises to very high values. Treatment in this case is by regular insulin injections.

The underlying causes of Type 2 diabetes relate to either insulin resistance, where the muscle and liver cells respond less efficiently to insulin, or to reduced insulin production. In both cases the result is a persistently elevated blood glucose level – twice the accepted normal value or more – which in the longer term leads to serious complications. The latter include damage to the retina and possible blindness, kidney failure, damage to blood vessels leading to cardiovascular disease and increased frequency of heart attacks or strokes, and peripheral neurological problems (loss of sensation, chronic pain). It is well understood that reducing blood glucose levels as far as possible towards normal can greatly reduce the risks of these complications. This can be achieved in a number of ways: diet or surgery to achieve weight loss, administration of insulin to boost low production or overcome resistance, administration of other pharmaceuticals to stimulate insulin production or increase the sensitivity of cells to insulin, or administration of a drug to block the glucose reabsorption channels in the kidney so that large amounts of glucose can be expelled in the urine.

A preliminary clinical study using a small group with Type 2 diabetes involved a dietary approach (Cambridge liquid diet of 800 Calories per day) to achieve weight loss and in particular to reduce the quantity of fat around the abdominal organs. Blood glucose concentrations were measured throughout the study and MRI scans of the abdomen were used to track the reduction of abdominal fat. After around 8 weeks the results were striking. Fasting blood glucose concentrations dropped back to normal within a week and remained so, insulin levels returned to normal and, for most participants, the end result was significant weight loss and remission of their diabetes for at least the period of the study. This led to a larger scale clinical trial funded by Diabetes UK centred on the Universities of Glasgow and Newcastle, in which GP surgeries were asked to recruit from their patients. Each surgery would be assigned either the diet trial or to continue with existing standard best weight loss management (controls). This was introduced in 2013 and is on-going. The study acronym is DiRECT (Diabetes REmission Clinical Trial), which has received the largest grant support ever awarded by Diabetes UK. The initial results were striking, sufficiently so that, although the trial is not yet complete, NHS Scotland and NHS England have decided to roll out this low calorie weight management approach widely. The DiRECT results to date had shown that nearly 50% of participants on the experimental regime were in remission after a year, and the proportion in remission was related to the degree of weight loss. Of the subgroup who had lost 15 kg or more, over 80% were in remission at this point. By comparison, only 4% of the control group were in remission. Further study should determine how long remission will last after returning to a normal diet. It seems highly possible that, for most people, if reduced weight is maintained, or further reduced, remission could be sustained indefinitely. In this way, for most patients the hazards associated with weight loss surgery can be avoided, and considerable savings would be made in reducing the dependency on various antiglycaemic drugs (drugs which, by one mechanism or another, can reduce blood glucose levels).

From this very important advance in the treatment of Type 2 diabetes, Dr Zammitt took some time to explain about some unusual forms of diabetes. We tend to think of Type 1 (insulin dependent) and Type 2 diabetes as the only ones. However, measurements of a proxy for insulin production called C-peptide (a fragment removed from the insulin precursor, proinsulin, to produce active insulin) have revealed some surprising results. As an example, a patient who had been diagnosed Type 1 at the age of 8, and who had been on insulin for 27 years, had huge difficulty for most of this time in controlling her blood glucose levels. This was despite very determined efforts. The result was that she had begun to show a number of complications – retinal damage in particular. Measurements of C-peptide showed that she was actually producing normal amounts of her own insulin! Drug treatment was therefore started to alleviate the insulin resistance causing her condition and, after 27 years, she was able to stop injecting insulin and sustain normal blood glucose levels. Her particular form of diabetes is due to a single gene mutation and follows a simple, predictable inheritance pattern. This is completely unlike Type 1 and Type 2, where many genes may be involved and inheritance patterns are complex.

Peter R

Geological Time    28 January 2019

David McAdam, geologist and leader of the U3A Geology Group, reviewed the fascinating history of the development of the concept of geological ('deep') time. In the classical period such a concept was meaningless – the earth was considered to be eternal and unchanging. Since early modern times, when Archbishop Ussher calculated the date of formation of the earth from biblical sources as 22 October 4004 BC at 6pm, various attempts were made using a more scientific approach. These yielded immensely greater ages, but none remotely as immense as the true value established by radiometric methods in the 20th century. Physical calculations in the 18th and 19th centuries were based on theoretical rates of cooling of the earth, estimates of rates of sedimentation, or estimates of the time required to achieve the current salinity of the oceans, and came up with ages ranging from 20 million to 200 million years. However, those calculating cooling rates knew nothing of heat production by radioactive decay, and the others inadvertently incorporated what were later found to be huge errors. With the eventual discovery of radioactive decay, where a radioactive isotope (a radioactive atom of a particular element) declines in abundance as the atoms emit radiation, it became possible to calculate a reasonably accurate age for the earth. Each radioactive isotope decays at a fixed rate characteristic of that isotope, defined by the half-life. Thus, for a particular isotope, the half-life is the time taken for any given quantity to decay to half of its initial value. The final products of a decay process are stable (non-radioactive) isotopes of different elements. Half-lives vary from tiny fractions of a second up to billions of years depending on the isotope, and the one of greatest significance for determining the age of the earth is Uranium 238, with a half-life of about 4.5 billion years. The end product of Uranium 238 decay is Lead 206. This Lead isotope is always radiogenic (produced as a radioactive decay product), whereas Lead 204 is not radiogenic and so provides a fixed point of comparison. Much oversimplified, the ratio of radiogenic Lead to Lead 204 in a rock sample provides a measure of the radiometric age of the rock. The oldest materials found at the earth's surface – tiny zircon crystals preserved in younger rocks – are around 4.4 billion years old, and meteorite samples give ages slightly older still. Hence, as near as we can estimate, the earth is around 4.56 billion years old.
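
For readers who like to see the arithmetic, the decay law behind all of this can be written, in a much simplified form which assumes a closed system containing no radiogenic lead at the start, as

\[ N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad t = T_{1/2}\,\log_2\!\left(1 + \frac{^{206}\mathrm{Pb}}{^{238}\mathrm{U}}\right), \]

where \(T_{1/2}\) is the half-life (about 4.5 billion years for Uranium 238) and the ratio is the number of radiogenic Lead 206 atoms to surviving Uranium 238 atoms in the sample. A sample containing equal numbers of the two would therefore be one half-life – roughly 4.5 billion years – old. Real age determinations are considerably more sophisticated, using Lead 204 to correct for any lead present initially.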

However, we didn't need to know that to develop the science of geology. Much sterling work was done in the 18th and 19th centuries to establish the main principles. These were based on the ideas that sedimentary rocks formed by gradual deposition on the sea bed of sands, muds, and the shells of tiny marine organisms which, over time under pressure, became rocks like sandstones, mudstones and limestones. Similarly, river and lake sedimentation also contributed. It was deduced that movements of the earth's crust (tectonic movements) folded and fractured these rocks, pushing them up into mountains and other land forms which would then become subject to erosion, wearing down over time, the products contributing to new sediments. The immensity of time required for these processes was first properly and famously appreciated by James Hutton, one of the great names of the Scottish Enlightenment in the 18th century. Hutton identified several sites where there was a discontinuity – technically an 'unconformity' – in sedimentary rock sequences. The most famous of these is at Siccar Point on the Berwickshire coast, just east of Pease Bay. Hutton went in by boat and saw almost horizontal beds of red sandstone overlying near-vertical truncated folds of grey sandstone (greywacke). He realised that eroding and truncating the grey rock to the extent visible must have required an immensity of time, after which the new sandstones were deposited on top. We now know that the eroded greywacke surface is around 400 million years old.

From then onwards the science of geology burgeoned. Hutton published 'Theory of the Earth' in 1795, which set out the basis of our modern understanding. Following his lead, in the 19th century the mapping of the different rock types exposed on the surface and identified through mining, canal and railway developments, with the concomitant identification of key fossils, led to the first geological maps and the subdivision of the geological record into distinct periods. These periods – Cambrian, Ordovician, Silurian, Devonian, Carboniferous etc. – were radiometrically dated in the 20th century. Thus Hutton is rightly feted as the founder of modern geology, and Siccar Point, around which his ideas crystallised, has for this reason become a world-renowned geological site.

Peter R

 

Air Quality … vehicles, agriculture and wood stoves         26 November  2018 

Before turning to the issues associated with the pollution sources named in the title, Professor Fowler gave us the essential back-story, from the realisation in the 1600s that air pollution in London was unpleasant and unhealthy, through the accumulating evidence for increased ill health and death rates from the later 19th century onwards. As long ago as 1661 John Evelyn, in his pamphlet 'Fumifugium' addressed to Charles II, recognised the effect of the London 'fogs' on health and well-being. This foul-smelling mixture of coal fire smoke from domestic and industrial sources, suspended in fogs and added to in the 20th century by vehicle emissions, was a characteristic of many cities, but most notoriously associated with London in the 19th century. The London 'pea-souper' figures greatly in the works of Dickens, Conan Doyle and other writers, and is seen in the art works of Turner and Monet. The term 'smog' (a hybrid of smoke and fog) was coined in the late 19th or early 20th century. The unpleasant smell was due to the very high content of sulphur compounds derived from coal. In 1952, the December smog in London caused such a marked increase in the death rate of vulnerable groups (the elderly, young children, people with respiratory illness, and so forth) that legislation was enacted in the form of the Clean Air Act, which came into full force in 1956. This began the slow reduction in pollution from the products of combustion – creation of smokeless zones, building new power stations in the country and closing those like Battersea Power Station which were located in the middle of cities. The problem of acid rain was recognised in the 1970s and resulted in further steps to decrease the use of high sulphur fuels. As a result, the levels of sulphur compounds in the UK atmosphere have been reduced to very low levels, particularly over the period 1970–1990.

While efforts to reduce pollutant gases like sulphur dioxide and nitrogen oxides (NOx) continue to have success in the UK and Europe, an American study from the 1990s highlighted the importance of suspended particulate matter as a health issue in atmospheric pollution. It has since been established that PM2.5 (tiny particles with diameters of 2.5 micrometres or less – about the size of a bacterial cell) is a major contributor to 30,000 deaths per year in the UK, and, statistically, to a reduction of 36 months in life expectancy in heavily industrialised regions like the Ruhr in Germany. Such particles vary in their chemical makeup: inorganic matter including carbon (soot) particles, metals and other materials from vehicle brakes; organic matter from vehicle tyres, cooking emissions and various domestic, agricultural and industrial sources. Thus particulate pollution is a mixture of components and, very importantly, there is no scientific consensus on the relative toxicity of the different components. This means it is not possible as yet to focus control efforts on those which cause serious harm. Weather patterns and proximity to sources play an important role in the level of exposure people experience. Pollution often falls sharply with distance from source, and particulates are carried further and spread more widely downwind of a source.

And so to vehicles! Much has been done to reduce emissions of pollutant gases, including unburned fuel components, through the use of catalytic converters. Filter systems in diesel vehicles reduce particulate emissions. In addition, the creation of ‘low emission zones’ and congestion charges in cities has had some impact. As yet, little if anything has been done to control the release of particulates from brake systems. This means, of course, that ‘all electric’ vehicles would still be a source of pollution.

Wood stoves are seen as providing a renewable and efficient use of fuel. However, it is arguable whether burning wood is truly 'renewable' in the context of the timescales for growing replacement timber. Also, in areas where wood stoves have increased in popularity there has been a concomitant increase in atmospheric particulates. These emissions can be moderated to some extent by ensuring that the fuel is dry.

Agricultural contributions to particulates in the atmosphere arise principally from ammonia (from fertilisers and livestock). Oxidation results in the formation of nitrates and other products which eventually are deposited. With regulation there has been a steady decline in the ammonia concentration in the atmosphere in the UK, but the rate of deposition of oxidation products has changed hardly at all over the last 10 years. It is likely that because of the removal of much of the sulphur pollution since the Clean Air Act, and thus of a large proportion of oxidisable gases, the oxidising power of the atmosphere is enhanced in relation to ammonia and NOx. The more rapid oxidation of these nitrogen compounds means that deposition occurs closer to source, before weather patterns have a chance to blow these gases to Europe and Scandinavia as once was the case.

After some comments on the potential (or lack of it) for increased tree planting to absorb particulates, many questions and a lively discussion ensued!

Peter R

Pebble Story: a tale of sediment from source to sink                      22 October 2018

Dr Mikael Attal gave us an engaging and very informative talk on a subject which, on the face of it, looks very mundane. Mikael’s talk, however, clearly held the interest of his audience. He began with Siccar Point and the layers of conglomerate one can observe there. The Siccar conglomerate formed around 350 million years ago when pebbles derived from rivers eroding a nearby mountain range were enveloped in the sandy sediments which, in time, formed Old Red Sandstone. From Siccar Point, to Kerrera, to Arran, Mikael illustrated features of sedimentary rocks in Scotland, and then moved a few thousand miles to Nepal where he has been involved in an extensive study of rivers emerging from the mountain boundary on to the ‘platform’ of the Indian tectonic plate. This plate collided with the Asian plate about 40 million years ago, and continues its drive north. The consequence is the  ‘crinkling up’ of the rocks to the north, forming the Himalaya, and the creation of a relatively abrupt boundary between the mountains and the sedimentary basin (Indian Plate) to the south.

Pebbles are the rounded, smooth, derivatives of rough, angular, rock fragments generally entering rivers from hilly or mountainous valley sides. The pebbles can be thought of as an intermediate stage in the conversion of rock to sand. The force responsible for this ‘grinding down’ is the flow of water in  the river valley which causes rock fragments to grind against each other and rocky banks and river beds. The greater the flow rate and volume, the larger the rock fragments which can be carried along. By their nature, the upper reaches of rivers arising in mountains flow rapidly and therefore can transport larger fragments and break them down to smaller pieces relatively quickly. The water flow is such that little permanent settling out (sedimentation) takes place at this stage. Periodic and sometimes catastrophic increases in loading occur as the result of landslips (caused by earthquake or sustained heavy rainfall, as in the Monsoon). This rapidly raises the height of the river bed, by as much as 100 metres in  the Himalayan valleys, and can result in the rapid change of a river course (avulsion) with drastic consequences for human settlements and agriculture. Notably, although it is one of the world’s poorest countries, Nepal has developed a highly sophisticated and very reliable system to give advance warning of potential landslips and floods, something that its rather richer neighbour, India, has failed to do.

When the rivers cross into the sedimentary basin, the velocity slows and sedimentation becomes dominant. First the remaining pebbles sediment out as gravel, the lighter sand particles sedimenting more slowly until, with lower river velocities further downstream, the sand is deposited. The gravel to sand sedimentary boundary is readily identified in these Himalayan water courses. Deposition necessarily causes an increase in height of the river bed above the level of the surrounding land. As a consequence, the river overflows to form new channels on either side and, gradually, with repetition, a sedimentary fan develops (cf. delta where a river enters the sea).

Understanding the dynamics of all of these processes is of great importance for the protection of human life and livelihood (see Nepal example), and for predicting the consequences of the construction of dams, and the artificial channelling of river courses.

The humble pebble, therefore, is one of the players at the heart of the physical processes of erosion and  river course development in the evolution of landscapes, and, in turn, the impacts of these processes on human settlement and civil engineering projects. The composition of pebbles tells much about the nature of the parent rock of which mountains are built, which, in the case of conglomerate pebbles like those at Siccar Point, allows deductions to be made about the nature of mountains long since eroded away.

Peter R

 

The Extremely Large Telescope: Engineering Challenges and Science Prospects     24 September 2018

Today we enjoyed a detailed and fascinating talk by Professor Colin Cunningham, formerly Programme Director of the UK contribution to this essentially European project to build the largest ground-based astronomical telescope on Earth – imaginatively named the Extremely Large Telescope (ELT) to distinguish it from the 'Very Large Telescope' already in operation. The location of the instrument, presently under construction, is a mountain top in the Atacama Desert, northern Chile. The clarity of the desert atmosphere, the absence of artificial light and the availability of such a lofty platform, Cerro Armazones, make the site ideal for the purpose. The scale of the construction required the removal of millions of tons of rock from the summit to produce a flat platform at more than 3000 metres above sea level. Ground works to lay the foundations are well under way, with the expected completion for first use of the ELT in 2024. As with most astronomical telescopes today, the light from objects under observation is collected by a primary mirror, and focussing to create an image is completed by other mirrors in sequence. The mirrors have concave surfaces – a magnifying mirror as used in the home is a simple example of the type. However the ELT primary mirror is constructed from a mosaic of many small mirrors (mirror segments), which makes its very large diameter manageable, but which requires very careful engineering. An essential component of the telescope is adaptive optics to correct for atmospheric turbulence (which causes stars to 'twinkle') and so allow the collection of clear, high resolution images at very high magnification. This requires a very complex system of computer control to continuously adjust the tilt angle of each mirror segment.

Now some statistics!

Cost of project: about 1 billion euros

Comparative scale of building: about 80 metres high and more than 40 metres wide – it would dwarf St Mary's Church

Diameter of primary mirror: 39 metres

Number of mirror segments: 798

Mass of dome: 5,000 tonnes

Mass of moving (tracking) framework: 3,000 tonnes

Light-collecting ability: 256 times greater than the Hubble Space Telescope, producing images 16 times sharper
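
The light-collecting figure can be checked with simple arithmetic: collecting power scales with the area of the primary mirror, and sharpness (resolution) with its diameter. Taking the Hubble primary as 2.4 metres across,

\[ \left(\frac{39}{2.4}\right)^2 \approx 264 \quad\text{(collecting area)}, \qquad \frac{39}{2.4} \approx 16 \quad\text{(sharpness)}. \]

The quoted factor of 256 is presumably based on the exact effective collecting areas of the two telescopes (neither primary is a complete, unobstructed circle), but the round numbers agree well.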

 

The ELT instrumentation is complex, but it is designed to detect both visible light and infra-red radiation. It will be capable of very high resolution, equivalent to making out a 1 pence piece at John O'Groats when viewed from Land's End. Its power should therefore allow the routine capture of images of planets around other stars (exoplanets) and the analysis of their atmospheres, which is essential to establishing the possibility of life elsewhere in the universe. Despite the speed of light being 300,000 kilometres per second, it takes 4 or so years for light to reach us from the nearest star (4 light years away). Hence, when we look into space, we are also looking back in time. The ELT will be much engaged in deep-space observations, gathering light which was emitted up to 14 billion years ago (14 billion light years distant), in order to understand early galaxy and star-forming events only several hundred million years after the 'big bang'. The ELT will generate vast quantities of new information about the universe, including much that we may expect but, more excitingly, much that will surprise us!
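
As a rough check on the 1 pence piece claim – using my own illustrative figures, not ones given in the talk – a coin about 2 cm across seen from roughly 1,000 km away subtends an angle of about

\[ \theta \approx \frac{0.02\ \text{m}}{1\times10^{6}\ \text{m}} = 2\times10^{-8}\ \text{radians} \approx 4\ \text{milliarcseconds}, \]

while the theoretical (diffraction-limited) resolution of a 39 m mirror observing at a wavelength of 1 micrometre is \(1.22\lambda/D \approx 3\times10^{-8}\) radians. The two are of the same order, so the comparison is a fair one.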

Peter R

 

 

 


2017/2018

The Search for Life Beyond Earth       28 May 2018

Who better than a Professor of Astrobiology to take us on a marvellous journey from Earth, through the solar system and beyond, to address the evidence supporting the existence of life elsewhere in the universe. The questions 'Are we alone?' and 'Is the universe teeming with life?' book-end our philosophical deliberations over many centuries. Professor Cockell began looking at what we now know by exposing us to astronomical numbers, referencing our home galaxy populated by billions of stars and the observable universe which is known to contain billions of galaxies. The discovery in recent decades, thanks mainly to the Kepler space telescope, of several thousand planets, a minority of roughly earth-size, associated with the relatively small fraction of visible stars it has surveyed in our home galaxy, begins to suggest, statistically at least, that life must have arisen elsewhere. The current view is that to harbour life planets must have orbits in the 'Goldilocks Zone': not too close to the parent star (too hot) and not too far (too cold), but just right for water to exist on the surface in liquid form. Such conditions could support life of some form. Statistically, again, robust calculations based on existing evidence suggest that the number of such habitable earth-like planets is of the order of 8.8 billion in our galaxy!

Water is the key. What about the solar system? There is now good evidence that Mars had oceans of liquid water which covered large areas of its surface several billion years ago. High resolution photography from orbiters, and surveys from Mars Rovers, in recent years have indicated the presence of sediments laid down on the edges of the seas, river channels and deltas. Cooling of the planet and the loss of atmosphere due to ablation by the solar wind meant that surface water evaporated away into space, and much of the remainder froze and now exists beneath the surface. However, the oceans and seas on Mars overlapped in time with the development of life on Earth. Thus, can we suggest the possibility that life arose on Mars as well? Certainly, if it did, even if it is now extinct, there is very likely to be evidence in the Martian sediments. It is important to understand that at this distance in time (3-4 billion years ago) life on Earth was microbial – essentially consisting of bacteria – and would certainly be the same on Mars. In fact in the 3.2 – 3.4 billion years or so life is thought to have existed on Earth, it was  around 2.6 – 2.8 billion years before the first animals and plants evolved. (The fossil record shows that simple animals and plants appeared about 600 million years ago). On top of that, modern humans have only been around for about 100 thousand years. Professor Cockell illustrated this in the context of a 24 hour clock which began with the origin of the earth and ended at the present day. On this scale, intelligent life (humans) has been present for less than one minute. Hence it is much more probable than not, that any extraterrestrial life we discover will be microbial in nature.
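
Taking the figures in the paragraph above at face value, the 24-hour analogy is easy to reproduce: scaling the roughly 4.6 billion year age of the earth to 24 hours (86,400 seconds), the 100,000 years or so of modern humans becomes

\[ \frac{1\times10^{5}}{4.6\times10^{9}} \times 86{,}400\ \text{s} \approx 2\ \text{seconds} \]

– comfortably 'less than one minute'; in fact barely the last couple of seconds before midnight.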

Other bodies in the solar system have water. The most important of these are Europa, a moon of Jupiter, and Enceladus, a moon of Saturn. Both have been closely examined by visiting space probes (Cassini in the case of Enceladus) and both have surfaces of ice. Beneath the ice there is evidence of immense and very deep oceans of salty water. Indeed, internal heating on Enceladus creates regular ejections through the icy crust of gaseous plumes into space. These have been spectroscopically analysed by Cassini, and shown to contain water, methane, carbon dioxide and several other compounds suggestive of conditions beneath the ice which might be favourable to life. With Mars, Europa and Enceladus we may well be close to finding out whether life could have arisen beyond Earth.

Further afield we could carry out spectroscopic analysis of the light passing through the atmospheres of planets around distant stars, to look for chemical 'signatures' which could suggest the presence of life on their surfaces. The most telling of these would be ozone, a form of oxygen, because there is good evidence that the principal source of oxygen in earth's atmosphere is photosynthesis, and so on any other planet ozone would be a fundamental signature of life.

The final section of the talk concerned a project in conjunction with the Scottish Prison Service, where prisoners, as part of an education programme, learn about some of the issues of planetary colonisation and are then given the opportunity to design in detail living accommodation and food production facilities. The outcome of this has been a book, "Life Beyond", which is a collection of these designs and ideas, and in this form the prisoners' work has been circulated to professionals working in astrobiology.

The talk ended with a picture taken a few years ago by the Cassini probe. The view was from just beyond Saturn, looking back towards the inner solar system, and showed a tiny little blue dot in the blackness – the insignificant little Earth and its inhabitants, including you and me, hanging in the void. If we are truly alone in the universe then this picture emphasises how very alone and how insignificant we would be.

Peter R

Predictions in Physics    26 March 2018

Professor Galbraith’s  talk addressed the importance of predictions in the confirmation of hypotheses and the advancement of scientific knowledge. He first outlined how the prediction of the position of an eighth planet, calculated from perturbations in the orbit of Uranus, led to the discovery of Neptune in 1846 when a  telescope was pointed at the predicted position in the sky. The discovery verified the Newtonian description of planetary orbits.

Later in the 19th century, James Clerk Maxwell developed a mathematical description which united electricity and magnetism in the theory of electromagnetism. The theory explained a range of known electromagnetic phenomena, and described light as an electromagnetic wave form propagated through space at the velocity we know as the speed of light. It was possible to predict from this the existence of other forms of electromagnetic radiation, including radio waves, which would propagate in the same way at the same velocity. The confirmation of these predictions followed later in the century.

In the early 1900s, Einstein developed his theory of special relativity, from which various interesting predictions could be made, including that time passes more slowly in a frame of reference moving relative to the observer. For example, relative to a person standing on the platform, time would pass (imperceptibly) more slowly for a passenger in a moving train. This prediction has since been confirmed by experiments which depend on the use of supremely accurate atomic clocks. Einstein's theory of general relativity, developed later, includes the idea that space-time is curved and that the curvature is increased by massive objects like planets, stars and galaxies. From this it could be predicted that light rays should follow a curved path round a massive object, so that observers would see the source of the light, e.g. a star, displaced from its true position. The famous expedition of 1919 measured the apparent displacement of stars whose light passed close to the sun during a total eclipse, and showed that the prediction from general relativity was correct within a very small margin of error. The theory also predicts the existence of gravitational waves, which have finally been detected only in the last 3 years.
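
The time-dilation prediction itself can be stated compactly: an interval \(\Delta t\) on a clock moving at speed \(v\) is seen by a stationary observer as the longer interval

\[ \Delta t' = \gamma\,\Delta t, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}. \]

For everyday speeds the effect is tiny – for a train at, say, 300 km/h (my illustrative figure, not one from the talk), \(\gamma - 1 \approx v^2/2c^2 \approx 4\times10^{-14}\) – which is exactly why supremely accurate atomic clocks are needed to detect it.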

In the 1920s the theoretical physicist Paul Dirac developed a mathematical description of the behaviour of particles on the atomic and subatomic scale. This was part of the development of the 'mechanics of the very small' – quantum mechanics. From the equations Dirac suggested the existence of antimatter, a form of matter where, for example, anti-electrons (positrons) have positive charge instead of negative charge and anti-protons have negative charge instead of positive charge. Antimatter was detected in the laboratory a few years later, so underpinning the 'truth' of Dirac's equations.

Overall this was a fascinating talk of which I have only scratched the surface.

Peter R

 

The Higgs Boson: Past, Present and Future    26 February 2018

Professor Victoria Martin led us on a clear and entertaining journey to explore the complexities of particle physics and very big laboratory equipment. Beginning with reference to CTR Wilson and his eponymous cloud chamber, the first apparatus with which the tracks of subatomic particles could be visualised, Victoria introduced the 'zoo' of fundamental particles which contribute to the make-up of atoms. To electrons, muons, tau, up and down quarks etc., she added the Higgs Boson, the particle derived from the Higgs Field. The latter is thought to pervade all of space and is responsible for the property of mass. Without it, nothing would have mass, a rather interesting concept which I won't play with! Professor Peter Higgs is a theoretical physicist based at Edinburgh University who in the 1960s both predicted the existence of such a particle and how it might be expected to behave. Around the same time other theoretical physicists arrived at similar conclusions independently. Following the confirmation of the discovery of the Higgs Boson at the Large Hadron Collider (LHC) in 2012, Peter Higgs and Francois Englert were jointly awarded the 2013 Nobel Prize in Physics.

Victoria's research work is devoted to the Higgs Boson, meaning she spends much time with the LHC near Geneva. This vast underground particle accelerator is essentially a giant circuit 27 km in length, along which protons are guided and accelerated to unimaginable velocities – very close to the speed of light. Packages of millions of protons are introduced such that they travel in opposite directions through separate tubes for a number of circuits. When they reach the desired velocity (energy) they are deflected into a single tube where they collide head-on, their mutual destruction producing sprays of fundamental particles (see first photograph above), the paths of which are photographed inside a detector. Evidence for the Higgs particle was found using two such detectors. The ATLAS detector, for example, is a huge structure built around the collision point. That it is of a size which would fill the full height and length of the main hall of the Scottish Museum in Chambers Street is thought-provoking! Its task is to take 40 million photographs per second to capture in 3 dimensions the results of billions of proton collisions per second. Complications in the search for the Higgs Boson are that it is only produced in about 1 in 1 billion collisions, and it decays almost instantaneously. Thus many collisions are necessary to reveal the particle and to confirm that the detector signals are truly those of the Higgs, and its presence can only be inferred from the products of its decay, which have long enough life-spans to be detected – the 'shadow' of the Higgs, as it were.
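
Taking the round numbers in this summary at face value gives a feel for why so much data is needed: with of the order of a billion proton collisions per second, and roughly one Higgs boson per billion collisions,

\[ \sim 10^{9}\ \text{collisions per second} \times 10^{-9}\ \text{Higgs per collision} \approx 1\ \text{Higgs boson per second}, \]

each of which decays essentially instantly and must be picked out of an enormous background – hence the 40 million photographs per second and years of accumulated data.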

So what now? Work with colliders such as the LHC continues with a number of objectives in mind. For example, is the detected Higgs Boson really the one which was predicted; will the existence and nature of 'dark matter' be proved; will 'dark energy' reveal its presence in these experiments; will a 'new physics' be uncovered? To progress work of this kind much larger colliders are planned: a much larger circular one, or a 50 km linear collider near the CERN complex (Geneva); large circular colliders in China; a linear collider in Japan. It seems something of a paradox that huge scale projects of precision engineering like these are designed to help us towards a fundamental understanding of matter and the universe at the smallest possible scale.

Peter R

Members’ Questions         22 January 2018

Members submitted a number of questions to be addressed by the assembly. Peter R chaired, and the group as a whole enjoyed an interesting afternoon of learning and discussion.

We had time to consider clingfilm and other plastics – how they are made from crude oil, and the role and potential health effects of the additives (plasticisers) added to PVC to produce clingfilm. We highlighted the strict regulations applied under EU Directives to ensure that the transfer of plasticisers to foods wrapped in clingfilm never exceeds a tiny permitted maximum. We also considered the terms 'degradable', 'biodegradable' and 'compostable' in relation to plastic bags in particular, and how they did not provide the answer to the environmental problems of discarded plastic.

One of the group gave a good outline of the contribution of carbon dioxide produced by transportation to the total produced by human activities – 20 to 25% on a worldwide basis. Cars emit a major proportion of the carbon dioxide associated with transportation, and hence the contribution made by car use to global warming is very significant. One car journey from Haddington to London would produce about 48 kg of carbon dioxide. By comparison, one flight from Edinburgh to, say, Heathrow generates around 9 tonnes for the aircraft as a whole. However, in terms of total carbon dioxide release, road transport trumps air transport because of the much larger numbers of vehicles involved.
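
The car figure can be turned into a simple rule of thumb (the distance used here is my own illustrative assumption, not one given at the meeting): emissions are roughly the distance travelled multiplied by an emission factor, so

\[ E \approx d \times f, \qquad 600\ \text{km} \times 0.08\ \text{kg CO}_2\ \text{per km} \approx 48\ \text{kg}, \]

i.e. the quoted 48 kg corresponds to a car emitting of the order of 80 g of carbon dioxide per kilometre over a road journey of roughly 600 km from Haddington to London – the right order of magnitude for an efficient modern car.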

We turned then to a question which in a factual sense was very easy to answer, but which in a speculative or philosophical sense gave scope for a great deal of thought and discussion. The question: “Do multiverses exist and, if so, what is the evidence for them?”; the answers: respectively, “we don’t know” and “there isn’t any”. However, there was considerable discussion on where the idea comes from – solutions to mathematical equations based on quantum principles mainly, but also the philosophical idea that each time we make a decision to follow one course of action rather than others, the other possible ones somehow are followed in different realities (universes) parallel to our own.

Finally, we switched themes yet again to address a question on evolution: "Do we have sources of direct evidence showing evolution in action on short timescales?" We discussed the meaning and application of natural selection, the problem that much evolution has taken place over immensely long time periods, and the fact that on shorter timescales, while natural selection can be seen in action, the dramatic step changes that form new species (speciation) are rare. We see selection for resistant bacteria out of populations exposed to antibiotics, and herbicide resistance arising in plant populations exposed to herbicides like 'Roundup'. But in these cases the change is at the sub-species level: resistant varieties of the same species are favoured. The same is true in the famous case of the peppered moth, often quoted as an example of rapid evolution. In the plant world, however, there are some clear examples of abrupt speciation, of which common 'cord grass' is a good example. It arose as a result of a hybridisation between a European cord grass (Spartina maritima) and an American one (Spartina alterniflora). This would have been a sterile hybrid but for the chance duplication of chromosomes in one or more individuals, giving rise to what is now called common cord grass, Spartina anglica. Its progenitors are less common because they are less well adapted to their environment than the hybrid.

Regrettably, we ran out of time to consider another biological question for discussion on the list.

Peter R

 

Under Pressure: Extreme Crystallography using the World’s Largest Lasers     27 November 2017

Prof Malcolm MacMahon

The talk described the use of advanced techniques in crystallography to investigate how the structural and physical properties of matter change under conditions of extreme pressure and temperature. After an introduction to the units of measurement for extremely small and extremely large quantities, and a chronology of the subject of x-ray crystallography from the pioneering work of the Braggs (pointing out the large number of Nobel Prizes awarded for work on crystallography since that time), Prof MacMahon went on to describe how techniques had developed to enable the study of matter under extreme pressure (such as in the core of planets or stars). In particular he described Edinburgh’s work on dynamic compression techniques, where very powerful lasers are utilised to compress samples over timescales of nanoseconds. They are starting to investigate the use of ultra-intense pulses of x-rays available from x-ray lasers at facilities such as the LCLS in Stanford, California, and XFEL in Schenefeld, near Hamburg, coupled with pulsed optical lasers, to achieve extreme pressure – temperature states in materials.

Mike Maher and Joan Bell

 

Where have all the Merlins gone?  A Lammermuirs Lament     25 September 2017

 

Merlin (male)
Merlin (female) at nest

In this first meeting of the new session we had a talk by Professor Andrew Barker in which he outlined the methods and conclusions from a study of Merlins in the Lammermuirs. The study began in 1984 and ended, abruptly, without warning in 2014, when Andrew and his two colleagues were denied vehicular access essential to the coverage of this large area. For the most part, until this happened, over the years the team had owed much to the support and cooperation of the land owners and gamekeepers, with complete freedom to use the many access tracks.

Andrew and his friends developed this study of Merlins in the Lammermuirs to add structure to their passionate interest in birds of prey, in this case focussed on the smallest of the falcons found in Scotland. The male Merlin is a little larger than a Blackbird, the female somewhat larger. The techniques employed were relatively simple in principle, but quite arduous to carry out: involving many hours walking in rough country, then many more hours observing for signs of nesting birds. Territories were identified and nest sites were located.  Close approaches were made to check for egg clutches and chicks, all with minimum disturbance and with every effort made to conceal tracks made in the heather after each visit. The team are trained bird handlers and are licensed to handle and ring the Merlin chicks.

Merlin nest in mature heather
Young chicks

 

Older chicks
Older chick developing adult plumage

Ringing

Andrew's talk brought together the findings of the 30-year study of Merlin falcon population and breeding success in the Lammermuir Hills with the substantial changes in land-use and land management in these uplands. This appears to have had major consequences for the wildlife diversity on the hills, not least for birds, the numbers of which have declined sharply over the last 15 years or so. The Merlin is a good case in point. In the first half of the study (1984 to about 2000) up to 12 or 13 nesting sites each season were not unusual, but in the last year of the study (2014) only 3 were found. Compounding the intensification of moorland management for grouse and the development of wind farms are the insidious effects of climate change. Our milder winters and warmer springs but colder, wetter summers make it more difficult for the Merlin to raise broods successfully.

Some of us have been walking in the Lammermuirs for many years and have seen the changes for ourselves, as well as some of their effects on the landscape. Clearly wind farm development is a very obvious change, with substantial areas now occupied by groups of wind turbines and their access tracks. Over the remainder, less obviously but arguably more detrimental to the landscape and ecology, has been the accelerating intensification of the driven-grouse ‘industry’, with the attendant increase in ‘muirburn’  (burning mature heather in patchworks to encourage fresh young growth as a food source for grouse), 4WD access tracks and other infrastructure to benefit the shooters.  Grouse population growth is promoted and supported to achieve numbers far beyond those typical of the closing decades of the 20th century.  The authors of the study think it highly likely that the Merlin will become extinct in the Lammermuirs, primarily due to habitat loss and lack of food. (For example, Meadow Pipits are typically on the menu for Merlins, but numbers of this species have declined sharply, probably for similar reasons to the Merlin itself).

Many individuals and groups agree that the degradation of upland moors like the Lammermuirs and the resultant loss of species diversity, both animal and plant, is biologically and aesthetically undesirable. Controls would require effective legislation which would allow the appropriate level of regulation. Unfortunately the vested interests which profit from current practice, not the least of whom are the land owners and their representative bodies,  are very influential, with the consequence that achieving necessary changes or additions to the law would be very difficult.

The Merlin is a good ‘type example’ of the effects of the ‘perfect storm’ created by the intensification of land management practices, coupled with climate change, on our upland bird populations.

Peter R