Wet Steam Measurement Techniques


In recent years the need for greater power station efficiency has become evident, and improving turbine blade efficiency is one of the methods proposed. This efficiency depends upon the wetness of the steam that comes into contact with the blades of the low-pressure turbine stage in general, and of all turbine stages in nuclear power generation. Real-time measurement of the moisture content of the steam, in conjunction with an accurate measure of steam velocity, can therefore give the overall mass flow entering the turbine, allowing for feedback control. Such a system can rely on a single technique that measures both suspended droplets and the wall-bound liquid film, or on a combination of techniques operating together. This work gives a comprehensive review of the different techniques used to measure the liquid film thickness and the moisture content, as well as techniques that can measure both simultaneously. Each technique has its strengths and weaknesses, and they are analysed to determine which technique works best overall and which techniques can be used together.


Introduction
Maintaining as high a steam cycle efficiency as possible is one of the most important tasks that a power station has to manage on a daily basis. The key to achieving this is the ability to monitor the quality of the steam at key points along the cycle: where the steam exits the heat exchanger, just before the steam enters the high- and low-pressure turbines, and within the turbines, especially near the last-stage rotors and stators. Drier steam in nuclear power generation can improve the isentropic efficiency and reduce erosion on the turbine blades (Malhotra and Panda 2001). As the steam expands through the turbine or cools, it loses energy. This energy loss is accelerated as work is extracted to drive the turbine blades, reducing the steam to its saturation temperature, whereupon it condenses into droplets, making the steam wet (Malhotra and Panda 2001). This means that as the steam condenses and drops below the saturated steam state, water droplets begin to form in the flow. As the process continues, a very thin film of liquid develops on the inside of the pipes and turbine casings. By the time the steam has reached the low-pressure turbine, the flow will have condensed to the point where there is enough liquid to significantly reduce efficiency.
Saturated steam flowing through the cycle into a low-pressure turbine has an annular flow structure: it carries droplets of liquid in the core flow as well as a thin film along the inside wall of the pipe. The majority of this liquid is removed before it enters the turbine by a moisture separator. However, as the steam passes through the rotors and stators it inevitably gains more moisture, which travels as droplets and collects as a film of liquid. This means the quality of the steam begins to deteriorate as soon as it passes through the first set of rotors and stators, which becomes evident in the third rotor stage, where heterogeneous nucleation begins. Droplets are produced as the saturated steam comes into contact with the surfaces of the rotors and stators. By the time the steam has reached the latter stages, homogeneous nucleation has begun to occur. This happens when the steam has lost enough energy to allow water molecules to stay close enough together for surface tension to take effect, causing the formation of microscopic droplets, as well as re-entrainment of droplets generated in earlier stages. A two-part in-depth analysis of this process was conducted by F. Martinez et al. (2011); it examined how and where the droplets are produced within the turbine, and the effects caused by the impact of droplets on the rotors and stators of the turbine.
Applying these droplet behavioural characteristics to a low-pressure turbine, a thin film of liquid is produced owing to the process of droplet nucleation; the film of liquid has a boundary layer within the gas-phase core, where the interaction happens. One of the first papers to combine previous studies on waves and apply them to a gas moving over a body of liquid was that of Miles (1957), who summarised the key ideas of what happens when air moves over a body of liquid, from which he was able to calculate the Reynolds numbers at which different wave profiles are found and what gas flow rates were required. Progressing on from this, the work of Chien and Ibele, Ueda and Tanaka, and Ueda and Nose (Chien and Ibele 1964) showed that a film of liquid is made up of two layers: a smooth-flowing under layer and a rough top layer that exhibits disturbance waves if the gas flow is high. Building on this, Han et al. (2006) looked into how these disturbance waves affect the film, and what further effects the gas flow rate has on the waves. Fig. 1 shows a diagram of the overall structure of a film of liquid with a wavy layer. They used an air and water mixture under three different pressures, the highest being 5.8 bar. The paper found that the waves were responsible for the flow rate of the film, and that the roughness of the wavy layer is due to the magnitude of the gas flow rate. The wavy layer of the film, however, travels much faster than the base layer, by a ratio of up to 1:14. The size of the wavy layer can change depending on the size of the waves caused by the gas flow rate; this means that, in their experiment, 60% to 80% of the liquid was driven along at the higher velocity in the wavy layer.
Fig. 1: The structure of the liquid film, showing the wavy layer of the film (1) and the smooth layer of the film (2) (Han et al. 2006).
The combination of droplet nucleation and a wavy liquid film can reduce the efficiency of a low-pressure turbine through blade erosion and corrosion on the rotors and stators; in addition, liquid present on the rotors changes the blade surface coming into contact with the steam, causing less work to be done on the blades (Nagpurwala). To aid in the prevention of this, a measurement device is required that can shed light on how the presence of droplets could give control over the effects seen within the low-pressure turbine. This literature review therefore looks into techniques that give a cross-sectional void fraction measurement, as well as a liquid film measurement, of the flow entering the turbine, providing both required measurements with techniques that are non-invasive so as not to disturb the steam flow. The review also analyses methods by which droplets and the local and overall steam wetness are measured inside the turbine. The aim is to give an up-to-date, in-depth analysis and review of the different techniques, and to assess which ones could now be implemented in the design of a nuclear power station. The fundamentals of void fraction measurement techniques are first described, followed by four categories: photon based, neutron based, capacitance and ultrasonic. The fundamentals of film thickness measurement are then described, followed by five measurement categories: photon based, subatomic particle based, capacitance, conductance and ultrasonic. Finally, droplet phase measurement techniques for use inside the turbines are analysed in the following sections: photon attenuation and scattering; direct optical measurement; electrostatic probes; and microwave probes.

Cross-Sectional Void Fraction Measurement Techniques
The main way that images of the structure of the multiphase flow inside a pipe are obtained is by using different tomography methods, whereby sensors placed around the pipe detect changes to an emitted physical signal as it passes through the different phases inside the flow. An algorithm such as the linear back projection method takes this data and converts it into a set of images showing a 2D cross-section of the phase density across the flow. Reviews of the different tomography techniques have been conducted before (Mohd et al. 2012), including where they could be implemented (Jamaludin et al. 2013). The following work builds upon this by looking into the latest advances in photon-based (gamma and X-ray) tomography, neutron tomography, neutron radiography, electrical capacitance tomography and ultrasonic tomography.
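As a sketch of how such a reconstruction works, the linear back projection step can be reduced to multiplying the measurement vector by the transpose of a sensitivity matrix and normalising. The 2 × 2 pixel grid and four-beam sensitivity matrix below are illustrative assumptions, not the geometry of any sensor discussed here:

```python
import numpy as np

# Hypothetical sensitivity matrix: one row per source-detector pair,
# one column per image pixel. Real systems derive this from geometry.
S = np.array([
    [1.0, 1.0, 0.0, 0.0],   # beam crossing the top two pixels
    [0.0, 0.0, 1.0, 1.0],   # beam crossing the bottom two pixels
    [1.0, 0.0, 1.0, 0.0],   # beam crossing the left two pixels
    [0.0, 1.0, 0.0, 1.0],   # beam crossing the right two pixels
])

def lbp_reconstruct(measurements: np.ndarray, sens: np.ndarray) -> np.ndarray:
    """Back-project the measurements onto the pixel grid and normalise
    each pixel by its summed sensitivity."""
    raw = sens.T @ measurements
    norm = sens.sum(axis=0)
    return raw / norm

# Simulate a phase disturbance concentrated in the top-left pixel:
m = S @ np.array([1.0, 0.0, 0.0, 0.0])
image = lbp_reconstruct(m, S).reshape(2, 2)
print(image)  # the highest value appears at the top-left pixel
```

The characteristic blurring of LBP is visible even in this toy case: the disturbance leaks into neighbouring pixels crossed by the same beams, which is why higher-resolution systems use iterative refinements of this starting image.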

Photon-Based Radiography and Tomography
Two different types of energised photons are used for tomography: gamma rays and X-rays; these techniques are an advancement of film measurements, where the detectors measure the attenuation rate of the transmitted signal to build up a cross-section of the flow. For gamma-ray tomography, a comprehensive background and system layout is given in the report Industrial Process Gamma Tomography (2008), which details the progress of gamma tomography around the world from 2003 to 2007. André Bieberle and his team designed a sensor that could detect gamma rays with high resolution (Bieberle et al. 2013), as shown in Figure 2. They first tested its resolution capabilities by seeing how the gamma source was detected through a slit between two lead bricks; they also tested how well it could reproduce an image of an aluminium block. Once both these tests gave sufficient resolution, the detector was used to create an image of a cross-section of a suspension of beads in water by fixing the detector to a rotating plate surrounding the pipe containing the flow; the plate could rotate at speeds from 0.025 rpm up to 10 rpm, delivering a signal every 0.9 angular minutes (Bieberle et al. 2013). The results showed that the detector could reproduce images of any sort of flow of beads in water, from stratified to annular.
Following this, they looked into using gamma-ray tomography to measure bubbly flow inside a pipe, as well as for inclined rotating fixed-bed reactors. For the first experiment they used three different designs of gas-into-liquid sparger to give different bubbly flow distributions, then placed the sensor at different distances along the flow and recorded the images (Bieberle et al. 2013).
The latest advancement of this technique was performed by Roshani et al. (2015), who used a gamma densitometer. They used a Monte Carlo simulation of the densitometry measurement characteristics (i.e. photon scattering within the sample) to predict results from known void fractions and flow regimes such as annular, stratified and homogeneous. Although the fluids used for the tests were not wet steam or a water and air mixture but gasoil and air, the results can still give insight into how well the system works. The team found that they were able to accurately predict the void fraction of annular, stratified and homogeneous regimes, to within an error of less than 1%. Figure 3 illustrates the comparison of experimental and simulation data obtained, where RD is the percentage difference between experimental and simulation data. The measurements were based on the Compton edge count, the highest energy that can be deposited, corresponding to a full backscatter by Compton scattering (Roshani et al. 2015).
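The principle behind a single-beam densitometer void fraction reading can be sketched as follows. The calibration counts are hypothetical values, and real instruments apply corrections (for example for the scattering that Roshani et al. modelled by Monte Carlo):

```python
import math

def void_fraction(i_measured: float, i_liquid: float, i_gas: float) -> float:
    """Void fraction along a single beam path from transmitted gamma
    intensity, using the standard logarithmic interpolation between a
    full-liquid and a full-gas calibration reading (a consequence of
    exponential attenuation)."""
    return math.log(i_measured / i_liquid) / math.log(i_gas / i_liquid)

# Hypothetical calibration counts for an empty (gas) and a flooded pipe:
I_LIQUID, I_GAS = 1200.0, 4800.0

alpha = void_fraction(2400.0, I_LIQUID, I_GAS)
print(f"void fraction = {alpha:.2f}")  # 0.50: halfway on the log scale
```

A full tomographic or densitometric system repeats this per beam path and combines the readings, but the per-path physics is no more than this logarithmic relation.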
X-ray methods work around the same process, the only difference being the higher attenuation rate due to the lower-energy photons being absorbed more easily. Axel Seeger and his team (Seeger et al. 2002) looked into using two different methods to visualise a multiphase flow. The first method was measurement of the three-dimensional velocity field by X-ray-based particle tracking velocimetry (PTV), where a single image was taken every 20 ms, resulting in 25 image pairs per second; this could give up to 1000 images made from 500 image pairs. The second method was liquid flow visualisation in a bubble column by injecting an X-ray-absorbing liquid. The results from these experiments were promising: they were able to show how the velocity field changed.
X-ray radiography, tomography and stereography were explored by Theodore et al. (2005). X-ray radiography is where a 2D image is taken of a 3D object by recording the X-ray attenuation through the object; the team first visualised a rubber ball falling through a water-filled column, then analysed how a bubble of gas rises through a column of water 32 cm long, as well as taking velocity measurements (Theodore et al. 2005). They then moved on to X-ray tomography, in which images were produced of an air and water bubbly column by rotating an X-ray source and detector around the target; this experiment also gave velocity readings of 3, 10 and 18 cm/s (Theodore et al. 2005). Finally, X-ray stereography was explored (Figure 4), highlighting how the three-dimensional position of a bubble can be determined in a column of water, amongst other characteristics such as object movement and changes in time. X-ray radiography and tomography were further analysed by Theodore, Terrence and Joseph, who took images of bubbly multiphase flow using X-ray radiography. For the experiment, they injected a mixture of 10% polyethylene glycol into a water flow where the viscosity was slowly increased; using radiography they were able to see individual bubbles when the flow was sufficiently slow (Theodore et al. 2007). X-ray tomography was explored by taking images of a water system flowing at 2 cm/s, and the resolution provided reasonably accurate results showing bubble positions and the void fraction in the flow.
Bin Hu et al. (2011) used an X-ray tomography system to measure a multiphase flow for an air and water system; they set up the system with two emitters and detectors, one positioned above the flow and one to the side, producing a top-down and a cross-sectional view. The system was tested for stratified and slug flow with a sampling rate of 300 Hz, clearly showing how the liquid moved slower than the gas by 2 to 5 m/s; other results included different projection views which could show wave propagation and phase distribution. Following on from this, they used an improved reconstruction algorithm to show cross-sectional slices of the flow as well as building them up into a 3D image.
Another form of X-ray tomography is known as electron beam X-ray tomography; one of the earliest examples of research in this area was carried out by Hampel et al. (2005), who used an electron source to hit a tungsten target, which emitted X-rays across an area. The changes in these X-rays were then picked up by a semicircular detector.
An evaluation of the technique was conducted by Bieberle and Hampel the following year (Bieberle and Hampel 2006), whereby different algorithms and noise levels were tested using the data. They found that noise levels could be decreased to less than 2% for bubbly flow where the bubbles were less than 5 mm, with a good lateral resolution measuring down to 1 mm. This research idea was further tested by Fischer and his team, who were able to make the beam move around the target and take images from different angles (Fischer et al. 2008), (Fischer and Hampel 2010). They achieved a maximum frame rate of 7 kHz and accurately imaged a flow, provided that it was moving at less than 5 m/s. The images were reconstructed using the standard linear back projection technique referred to in Tomographic Image Reconstruction (2015).
Bieberle investigated the possibilities of dual-plane tomography to measure the velocity of the flow (Bieberle et al. 2010). He also compared electron beam tomography to a wire-mesh sensor to see the differences in resolution and reconstruction of the flow image. What is immediately noticeable is the difference in resolution of the images: the non-invasive technique gives much clearer images, meaning that much smaller bubbles are visible. Stürzel and his team (Stürzel et al. 2011) furthered this idea by expanding to 3D electron beam tomography (i.e. a scanning X-ray beam which is stimulated by an electron beam), which meant objects could be scanned at above 1 kHz, though slower than with 2D electron beam tomography; however, it was not tested on a multiphase flow; instead, circular phantom models were used for the tests, and due to the increased time frames the spatial resolution was reduced to 1.2 mm. Following on from this, the technique was reviewed by Bieberle, Stürzel and Hampel, who compared it to 2D tomography and dual-plane tomography (Bieberle et al. 2012); it was highlighted that the problem with 3D is that it can only be used on static volumes. The benefit of 2D dual-plane is that it can be used on multiphase flows, giving a resolution of less than 1 mm, and can be used for accurate velocity calculations.
From that point on, specific parts of electron beam tomography were analysed to see how they could be improved upon. Halls evaluated the X-ray source used for 2D and 3D electron beam tomography, comparing the Advanced Photon Source (APS) synchrotron, a narrowband X-ray source, to a broadband X-ray source (Halls et al. 2014). The results showed that APS was able to overcome several limitations of the broadband source, such as having sufficient attenuation, and was able to resolve spatial features down to 10-12 µm. Bieberle et al. (2015) looked into the most time-consuming part of the tomography process: the data processing of the CT images. They adapted the data processing and reconstruction algorithms for parallel processing on six-core CPUs, an improvement that allowed up to 137 slice images to be reconstructed per second. The most recent review of the area was produced by Barthel et al. (2015), who analysed velocity measurement of the flow. They looked at the different techniques that can measure velocity from electron beam tomography: how velocity information can be extracted from the CT images, how a single bubble can have its velocity measured, and how this can be expanded to measuring the velocity of particulates within a gas/liquid two-phase flow. Finally, the team looked into velocity measurement of a single-phase flow using a contrast agent.
To conclude, of the available photon methods, APS would be very useful for wet steam measurement, as droplets produced by heterogeneous and homogeneous nucleation can have diameters as small as 10 µm, so the technique would have the resolution to pick up the presence of the liquid phase. Velocity measurement would be helpful as well, because it would give an accurate assessment of the mass flow rate.
Neutron-Based Imaging
Mishima et al. (1997) designed and built a sensor, based upon previous work, that used neutron radiography to produce images sufficiently clear to accurately measure the droplet flow regime. They firstly used the microscopic neutron cross-section scaling method, a quantification technique they developed in 1996; secondly, they recorded images over periods of up to 20 minutes at 250, 500 or 1000 frames per second. Finally, as with gamma rays, the strength of the beam has to be chosen so that there is the maximum difference in the attenuation rate between the gas and liquid phases. In this experiment, Mishima, Hibiki and Nishihara used thermal neutrons in the region of 2 eV with a neutron flux of 1.5 × 10⁸ cm⁻² s⁻¹ (Mishima et al. 1997). This technique produced images of slug flow, bubbly flow, churn flow and annular flow; the team were able to calculate the void fraction for each of these flow regimes, with an error of as little as 5%. What is interesting from this paper is that the technique can show the change in the flow regime in real time as well as in slow motion, and can calculate the void fraction in each time frame.
The following year they published further work exploring neutron radiography with high frame rates. Using a similar setup as before, they looked into how neutron scattering affects the accuracy of the images of the void fraction (Mishima and Hibiki 1998); Equation 1 denotes how the error is related to the neutron radiant fluence, the radiant energy received by a surface per unit area (Mishima and Hibiki 1998). By integrating N consecutive frames, the error can be decreased to E' (Equation 2) (Mishima and Hibiki 1998). Many of the experiments after this point for neutron imaging methods focus on examining flows through a fuel bundle in a nuclear power plant (Takenaka and Asano 2005). Mor et al. (2009) differed from this approach and looked into the same area as Mishima, Hibiki and Nishihara (Mishima et al. 1997); they used a machine of their own construction called Trion generation 1 (Mor et al. 2009). The machine used high-frame-rate neutron radiography combined with the option of adding gamma-ray radiography to further inspect an object (Mor et al. 2009). Zboray et al. (2014) used the third generation of the Trion (Figure 5), without the gamma radiography, to measure a bubbly flow system using a polychromatic fast-neutron beam line, which produces neutrons from the d + Be reaction, giving a fluence-averaged neutron beam energy of near 5.5 MeV. Slug and bubbly flows were recorded by the device, showing the instantaneous gas volume fraction for every 30 ms; the following year, the same experiment was improved upon (Zboray et al. 2015), whereby the exposure time was reduced to 0.33 ms, combined with different post-processing techniques to further demonstrate the abilities of the high-speed method.
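The benefit of integrating consecutive frames can be illustrated with generic counting statistics, under the standard assumption that the noise in each frame is independent, so the error falls as the square root of the number of frames. This is a sketch of the scaling behaviour only, not the exact expressions of Equations 1 and 2:

```python
import math

def averaged_error(single_frame_error: float, n_frames: int) -> float:
    """Statistical error remaining after integrating n consecutive
    frames, assuming independent (Poisson-like) counting noise, so the
    error scales as 1/sqrt(N). A generic counting-statistics sketch,
    not the expression from Mishima and Hibiki (1998)."""
    return single_frame_error / math.sqrt(n_frames)

# Integrating 100 frames cuts a 5 % single-frame error to 0.5 %:
print(averaged_error(5.0, 100))  # 0.5
```

The trade-off this exposes is the one the neutron papers navigate: averaging frames suppresses noise but sacrifices the time resolution that makes the high-frame-rate methods attractive.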

Electrical Capacitance Tomography (ECT)
Electrical capacitance tomography gives a low-resolution but complete cross-section of the flow regime. It works upon the principle that, in the case of an air and water mixture or wet steam, as pressure is increased the permittivity of the different phases decreases, with the greatest change in the vapour phase. Thus, as pressure is increased, there is a greater difference between the permittivities of the two phases, and thus a greater difference between the capacitances of the phases present in the mixture; it is by measuring these changes that an image of the flow regime is reconstructed.
A clear example of how the system works was published by Yang et al. (2004), who outlined how an ECT sensor is set up and calibrated to look into the distribution of water droplets in a wet gas flow with a dryness fraction from 0.8 to 0.05, seeing how the experimental permittivity distribution fitted three different models that predict water distribution within the sensor based upon permittivity distribution: the Maxwell, series and parallel models. The sensor they used had four pairs of opposite electrodes connected to an image reconstruction circuit; its design is shown in Figure 6. The sensor was calibrated by inserting solid foam with a known permittivity into the testing area; 5 mm holes were then cut into the foam and filled with a material of relative permittivity 7, and the reconstructed image on the screen showed the approximate area where there was a permittivity change. After this testing, the sensor was installed in an actual test rig called Twister, which was designed to simulate wet gas flowing through the sensor. It was found that the sensor could measure tiny changes in relative permittivity of between 1 and 1.06 and was ideally suited to measuring wet gas flows from 0.95 to 0.05 dryness fraction. Shafquet et al. (2010) looked into an ECT system calibrated to measure the void fraction in different types of bubbly flow; nine different cases of bubbly flow regimes with different gas and liquid speeds were tested. The team modelled the void fraction distribution using the same three models: parallel, series and Maxwell. They found that superficial gas velocity is a critical component that can affect void fraction significantly, showing that as the gas velocity increases the void fraction increased to a maximum value of 31% (Shafquet, Ismail and Karsiti 2010).
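The three permittivity mixing models named above can be sketched as follows; the formulas are the standard parallel, series and Maxwell(-Garnett) expressions relating effective permittivity to void fraction, with illustrative permittivity values rather than those of the cited experiments:

```python
def eps_parallel(alpha: float, eps_l: float, eps_g: float) -> float:
    """Parallel model: volume-weighted average of the phase permittivities."""
    return (1 - alpha) * eps_l + alpha * eps_g

def eps_series(alpha: float, eps_l: float, eps_g: float) -> float:
    """Series model: harmonic (layered) average of the phase permittivities."""
    return 1.0 / ((1 - alpha) / eps_l + alpha / eps_g)

def eps_maxwell(alpha: float, eps_l: float, eps_g: float) -> float:
    """Maxwell(-Garnett) model for spherical gas inclusions (void
    fraction alpha) dispersed in a continuous liquid phase."""
    num = eps_g + 2 * eps_l + 2 * alpha * (eps_g - eps_l)
    den = eps_g + 2 * eps_l - alpha * (eps_g - eps_l)
    return eps_l * num / den

# Water (relative permittivity ~80) with a 30 % void fraction of air (~1):
for model in (eps_parallel, eps_series, eps_maxwell):
    print(model.__name__, round(model(0.3, 80.0, 1.0), 2))
```

The spread between the three predictions for the same void fraction is large, which is why the ECT papers above compare all three models against measured data before inverting a permittivity image into a void fraction estimate.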
Following on from this calibration work, the team used an ECT sensor with 12 electrodes to estimate the void fraction inside a bubbly flow of air and deionised water. The same setup as before was used, with different speeds of gas and liquid passing between the sensors. The results from the experiment backed up their earlier finding that the superficial gas velocity is directly linked to the void fraction. They then moved on to optimising methods for calculating the void fraction from the ECT image, looking into three different techniques: estimation of the void fraction using ECT, estimation from a differential pressure (delta p) arrangement, and estimation using photographs with calculation from normalised pixels. It was found that the delta p method and the calculation using normalised pixels were in good agreement; further calibration work with the sensor using deionised water and air showed the predicted differences between the three models for measurement of the void fraction (Shafquet and Ismail 2012).
The work of Yang et al. (2004), Shafquet et al. (2010) and Ismail et al. (2011) showed that results could be changed or made inaccurate depending on how the sensor was built and how the electrodes were arranged. Brook et al. (2014) investigated this dependence; to analyse it they built a sensor that combined these two properties and allowed the use of adaptive electrode measurement. This sensor is composed of three meters, containing two electrodes in meter one, four in meter two and six in meter three (Figure 8). The different electrode configurations allowed a comparison of the sensitivity of different numbers of electrodes; the team found that as the number of electrodes increases from two to six, the range of measurable capacitance values decreases, from 67-69 pF for two electrodes to 50.8-60.5 pF for six electrodes, but that details can be visualised with a greater number of electrodes. They concluded that, for small bubbles of 1 mm, a four-electrode system is optimal.
For wet steam measurement, there would not be a difference between using chordal or diametric tomography; the main difference would come from the resolution generated by the number of electrodes used and what droplet size could be measured. A system similar to that of Brook et al. (2014) would be ideal, as different numbers of electrodes in parallel along the same pipe could measure a range of droplet sizes.

Ultrasonic tomography
Ultrasonic tomography uses an array of ultrasonic emitters placed around the pipe. Some of the earliest investigations into the technique were conducted by Asher (1983) and by Hoyle (1996). Building upon this, Ruzairi et al. (2004) studied how to implement an ultrasonic tomography device for the measurement of a multiphase flow; the team used 16 pairs of transmitters and receivers (Figure 9) to build up a cross-section of the flow in real time using the spectral analysis strategy of Li and Hoyle, which examines the phase information of the reflected ultrasonic signal detected by a transducer (Li and Hoyle 1997). The paper (Ruzairi et al. 2004) highlighted the problems with the technique, such as total reflection by liquids inside the flow, which gives errors, as well as the technique being suited to flows with a high void fraction due to the large difference in density.
A practical application of the technology was explored by Abdul et al. (2011), where they used a different algorithm to visualise multiphase flows as well as mixtures with solids, liquids and gases. This was achieved by combining all the reflections together, which were received at each transducer. An initial test was carried out with a fixed length of pipe filled with liquid. When this process was applied, it converted the signals into a concentration matrix, which was then converted to pixels using linear back projection to produce a tomogram of the cross section of the pipe. Their results showed that it was an accurate method to measure different flow regimes, but they did not measure annular flow (Abdul et al. 2011).
Taking the method further, the team looked into the linear back projection algorithm to build up real-time images of gas bubbles inside the flow (Mohd et al. 2012); their results showed where any errors with the technique would arise, and they were able to map images of multiple gas bubbles of different sizes; the image in Figure 10 shows how two gas bubbles have been mapped using the LBP algorithm. A subsequent paper detailed how this technology works and the algorithm used to process all the reflections from each of the waves produced by the multiple transducers. The algorithm is based upon ellipses: the reflections of the waves from each set of transducers within one ellipse are picked up by another set of transducers within another ellipse. The paper states that, by increasing the number of transducers used, the resolution of the image can be improved; with 80 transducers, details in the images can be seen at less than 500 µm.
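The ellipse idea can be illustrated with a short sketch: for one transmitter-receiver pair and a measured echo delay, a reflector must lie on the ellipse whose foci are the two transducers, since the transmitter-reflector-receiver path length is fixed by the travel time. The positions, sound speed and tolerance below are illustrative assumptions, not values from the cited papers:

```python
import math

SPEED_OF_SOUND = 1480.0  # m/s, approximate value for water

def on_reflection_ellipse(point, tx, rx, travel_time, tol=1e-3):
    """True if `point` lies (within `tol` metres) on the ellipse of
    possible reflector positions: the locus where the transmitter-to-
    point-to-receiver path length equals SPEED_OF_SOUND * travel_time."""
    path = math.dist(tx, point) + math.dist(point, rx)
    return abs(path - SPEED_OF_SOUND * travel_time) < tol

tx, rx = (0.0, 0.0), (0.1, 0.0)      # a transducer pair 0.1 m apart
reflector = (0.05, 0.05)             # candidate bubble position
t = (math.dist(tx, reflector) + math.dist(reflector, rx)) / SPEED_OF_SOUND

print(on_reflection_ellipse(reflector, tx, rx, t))       # True
print(on_reflection_ellipse((0.05, 0.08), tx, rx, t))    # False
```

One pair constrains the reflector only to an ellipse; intersecting the ellipses from many transducer pairs is what localises the bubble, which is why adding transducers improves the resolution.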

Film Thickness Measurement Techniques
One of the key parts of measuring the overall mass flow of steam is measuring the liquid content. Since wet steam can have an annular flow structure, there will be droplets present in the core of the flow and a liquid film of varying thickness flowing along the inner wall of the pipe; both of these amounts of liquid require measurement in order to calculate the mass flow of steam. A review by Cristiano et al. (2010) classified the different film measurement techniques by their core technology into the following classes: optical, acoustic, electrical and nuclear. This paper uses these classifications as a basis to find the relevant literature, then critically analyses techniques that are non-intrusive, so as not to disturb the flow, and are able to measure films of less than 100 µm thick.

Photons
One of the more straightforward ways of measuring a film of liquid is by measuring changes in the attenuation rate of photons as they pass through a material; the energy of the photons determines which photon behaviour is exploited to obtain results. In the case of high-energy photons such as gamma rays, the measurement works around the principle of the difference in the attenuation coefficient between solids, liquids and gases: radiation in the form of gamma rays and X-rays is attenuated more strongly by solids and liquids than by a gas. Therefore, when beams of radiation are aimed at a two-phase flow, the difference in the attenuation coefficient will highlight the size of the void fractions and pick out areas of high liquid content such as a film.
Abouelwafa and Kendall first summarised advancements in alpha particle tomography in 1980 (Abouelwafa and Kendall 1980). The first and most important point they realised was that there has to be a significant difference between the attenuation coefficients of the different materials involved. They used this information to work out how to calculate the thickness of different phases inside a multiphase flow, such as a two-phase annular flow; Equation 3 details what they calculated for measuring the thickness of two different phases inside such a flow. Jiang and Rezkallah built upon this work (Ref) to construct their own gamma-ray densitometer, which could accurately measure void fractions inside a small-diameter pipe; they used an air and water mixture and tested the device on slug flow and annular flow (Jiang and Rezkallah 1993). The densitometer works by firing a beam of gamma rays, produced by caesium-134, at a target section of pipe; a detector on the other side of the pipe measures the transmitted intensity of the beam.
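The principle behind such two-phase thickness calculations can be sketched as a dual-energy attenuation measurement: at each beam energy, the logarithmic attenuation is a linear combination of the two unknown phase thicknesses, so two energies give a solvable 2 × 2 linear system. This is an illustration of the idea, not the exact Equation 3, and the attenuation coefficients are hypothetical values:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (per cm) of each phase
# at two beam energies; real values depend on the source and fluids.
MU = np.array([
    [0.20, 0.002],   # energy 1: [mu_water, mu_gas]
    [0.09, 0.001],   # energy 2: [mu_water, mu_gas]
])

def phase_thicknesses(log_atten: np.ndarray) -> np.ndarray:
    """Solve ln(I0/I) = mu_water*t_water + mu_gas*t_gas at both energies
    for the unknown thicknesses [t_water, t_gas] in cm."""
    return np.linalg.solve(MU, log_atten)

# Simulate a beam path crossing 1.5 cm of water and 8.5 cm of gas:
true_t = np.array([1.5, 8.5])
measured = MU @ true_t
print(phase_thicknesses(measured))  # recovers [1.5 8.5]
```

The requirement Abouelwafa and Kendall stress, a significant difference between the attenuation coefficients, corresponds here to the matrix being well conditioned: if the two phases attenuated similarly at both energies, the system would be nearly singular and the solved thicknesses would be dominated by noise.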
No measurements were taken of the thickness of the film, but the technique was used to calculate the overall void fraction of the flow for slug and annular flow patterns using the densitometer. It was also found that the strength of the gamma beam in the test section affected the accuracy of the measurements, so they chose to use a clear plastic pipe as the test section, as it allowed the use of a less powerful gamma source, which gave a greater difference in the attenuation coefficient. Later, Zhao et al. (2013) performed the same experiment using a very similar setup, testing on a variety of different flow regimes, ranging from bubbly flow to annular flow; they were also able to record that a liquid film was present and calculate the speed of the film, which was close to the results presented before (Zhao et al. 2013).
Stepping down in energy from gamma rays, techniques that utilise the properties of X-rays have also been shown to be effective in measuring the thickness of thin films; a review by Chason and Mayor (1997) critically analysed the use of specular X-ray reflectivity (XRR), which can measure film thicknesses of different materials from 0.1 to 100 nm; they also explored the best way to measure a liquid film using X-rays by means of a flow visualisation method.
In conclusion, techniques that measure the thickness of a liquid film using gamma and X-rays remain largely unexplored; as demonstrated earlier in the review in section 2.1, they are much better suited to measuring the overall void fraction of the flow.

Optical techniques that use photons in the visible spectrum have been shown to be more effective in measuring a film of liquid. Light attenuation works in the same way as the gamma ray techniques, exploiting the fact that some photons are absorbed as they pass through the flow; because lower energy photons are used, there is a greater difference in attenuation. A laser is used as the photon source, channelling the photons coherently across the flow. Utaka et al. (2004) used this technique to measure the liquid film thickness as a gas bubble passed between the detector and emitter. They passed a bubble with a diameter of 0.5 mm between the two at different velocities, giving a slug flow (Utaka et al. 2004), and measured the time the bubble took to pass and the change in thickness; the film thickness was then calculated using Lambert's law (Equation 4).
Equation 4: (Utaka et al. 2004) Using this equation, the film thickness could be calculated from the setup illustrated in Figure 11. The results showed the measured film thickness for different bubble velocities and liquid temperatures: for a bubble of 0.5 mm with a velocity of 0.38 m/s and a laser power density of 2.5 kW/m², a film thickness of 5 µm was recorded. Increasing the power to 6.7 kW/m² with a velocity of 2.8 m/s gave a much larger film thickness of between 20 and 25 µm. These results showed that as the bubble takes longer to form, the film thickness decreases.
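The Lambert (Beer-Lambert) law calculation underlying this technique can be sketched as follows. The absorption coefficient and intensity values below are illustrative, not taken from Utaka et al.; the point is only to show the rearrangement from transmitted intensity to film thickness.

```python
import math

def film_thickness(i_transmitted, i_incident, absorption_coeff):
    """Film thickness from the Lambert law I = I0 * exp(-a * d),
    rearranged to d = -ln(I / I0) / a, with absorption_coeff a in 1/m."""
    return -math.log(i_transmitted / i_incident) / absorption_coeff

# Illustrative numbers: a 5 % drop in transmitted intensity with an
# assumed absorption coefficient of 1.0e4 m^-1 implies a film of ~5 um,
# the same order as the thinnest films reported above.
d = film_thickness(0.95, 1.0, 1.0e4)
print(f"{d * 1e6:.2f} um")  # → 5.13 um
```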
Other behaviours that can be observed are reflection and refraction, caused by photons travelling through media of different densities; these fall under the general behaviour of photon scattering. By relying on this property, different phases of matter within the annular flow can be detected, either by locating the boundary between the gas and liquid phases, thus highlighting whether a film of liquid is present, or by detecting regions of strong photon scattering off the surface of the liquid in the flow, again detecting the presence of a liquid by reflection. This principle was exploited by Yu et al. (1996), who used a fibre optic sensor built into a trough to measure how light was reflected back from the liquid film phase boundary (Yu et al. 1996). This setup was then used to calculate the variation of light intensities across the surface of the film. The technique was mainly used to determine how to track the surface waviness of the film while the film thickness was predetermined; the aim was to establish at what film thicknesses the light-receiving fibres could still obtain a usable light intensity signal. They found that the light sensitivity was good from 0.5 mm to 1 mm film thickness, but beyond 4 mm the light intensity dropped off quickly.
In addition, Oliveira et al. (2006) used the technique with fibre-optic cables, with receiving sensors set 3 mm apart, to measure the film of liquid in a trough. They first created a theoretical model of reflected light intensity predicting that films less than 500 µm thick could be measured; however, they ran the experiment measuring films down to 1.5 mm, which accurately matched the model, while also concentrating on measuring thicker films up to 4 mm. Overall, using photons with wavelengths in the visible spectrum as a film measurement technique can be very accurate and allows for measurement of films less than 100 nm (Oliveira et al. 2006).

Subatomic Particles
Switching from photons to neutrons, the advantages are a longer recording time, no trigger signal and a very high frame rate. The process works in the same way as the photon techniques: a beam of neutrons passes through the test section, and sensors on the other side detect the amount of scattering, from which an image of the flow regime is built. Just as with gamma rays, it is very important that the correct source is chosen. The beam must have a sufficiently high neutron flux, that is, concentration of neutrons per unit area per unit time. Secondly, two types of neutrons are used: cold neutrons with energies of up to 0.025 eV and thermal neutrons with energies above 0.025 eV. These two measures combine to determine the attenuation coefficient of the neutrons. In order to generate the required neutron flux, the correct neutron source must be used; sources vary in size from small spontaneous fission materials to large high energy reactors. In most cases, to gain higher accuracy results the flux should be as high as possible, so the source should be as large as is practical for the experiment.
Research nuclear reactors rely upon the fission process (Equation 5) to produce a constant stream of neutrons.
²³⁵U + n → X + Y + 2.5n

Equation 5: Fission process for producing neutrons from uranium-235, where X and Y are fission products (Arai and Crawford 2009). Such reactors can produce up to 8 × 10¹⁴ n/cm²·s of neutron flux for a 20 MW reactor such as the FRM II (Forschungsreaktor München) and can be made quite compact. At the other end of the scale there are small spontaneous fission materials which have a low neutron flux, such as uranium-235, in which small isolated samples undergo the process above, or other heavy elements with an unstable nucleus such as plutonium or californium.
Referring back to section 3.2, Mishima and Hibiki used neutron radiography to estimate the void fraction using the total microscopic cross-section scaling method (Mishima, Hibiki and Nishihara 1997; Mishima and Hibiki 1998). A normalised grey scale was created from the neutron scattering, from which the void fraction was then calculated; they obtained images of annular flow at different speeds and were able to show different film thicknesses of at least 5 mm, obtained by rearranging equation 6 below.
GS = Cφₜ · exp(−Σδ) + G

Equation 6: Grey scale formula, where Cφₜ is the incident neutron flux term, Σ is the microscopic cross-section of the liquid phase, δ is the thickness of the liquid phase and G is the offset term of the greyscale (Barthel et al. 2015).
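The rearrangement of the greyscale relation for film thickness can be sketched as below, assuming the greyscale follows the exponential attenuation form GS = Cφt·exp(−Σδ) + G implied by the term definitions; the numerical values are invented for illustration.

```python
import math

def film_thickness_from_greyscale(gs, c_phi, sigma, g_offset):
    """Invert the assumed greyscale relation GS = C*phi*exp(-Sigma*delta) + G
    to recover the liquid-phase thickness delta.

    Units: delta comes out in the same length units as 1/sigma.
    """
    return -math.log((gs - g_offset) / c_phi) / sigma

# Illustrative values only: Sigma in 1/mm, greyscale counts are arbitrary.
delta = film_thickness_from_greyscale(gs=120.0, c_phi=200.0,
                                      sigma=0.35, g_offset=10.0)
print(f"{delta:.2f} mm")  # → 1.71 mm
```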
The results gave calculated film thicknesses of between 0 and 7 mm, which were compared against the film thickness derived from observing the images produced by the microscopic cross-section scaling method, giving an error of only 0.23% (Mishima and Hibiki 1998). Zboray and Prasser (2013) used neutron radiography to measure film thickness in a two phase annular flow inside a fuel bundle; they set up a flow system in which, using water and air, they were able to create multiphase annular and slug flows.
The results from this experiment gave clear recordings of how the void fraction changed with the velocity of the flow. The paper also shows how the film thickness is calculated from the attenuation coefficient; this gives a minimum detectable film thickness for the technique of 25-30 µm, while the maximum detectable film thickness was calculated to be 5 mm. Finally, the paper gives detailed graphs showing the height of the film at various points along the channel, with the film between 0.05 mm and 1 mm thick.
To conclude, the evidence shows that subatomic particles are well suited to detecting and measuring the thickness of a film present in an annular two phase flow of a water and air mixture; however, the literature contains very little work on actual film measurement of annular flow using neutron radiography. The downside to using neutrons is the neutron source, as it requires a reactor to produce a constant stream at the required energy level (Mason et al. 2000).

Conductance
Conductance techniques rely upon the simple principle of having a gap between two electrodes with an electric potential difference between them; as a flow passes through, the conductance changes directly with the size of the liquid film entrained within the water and air flow. Coney developed a probe with two electrical strips mounted in the surface of the pipe, separated by an insulating material (Coney 1973). A few interesting results were derived from his experiments, the main one being that he could measure films of water from less than 1 mm up to a maximum of 2.5 mm, and that the accuracy of the measurements was directly related to the construction of the electrodes: spacing the electrodes less than 1 mm apart with adequate insulating material gave the required degree of accuracy.
Following on from these results, Thwaites, Kulov and Nedderman (1976) looked into measuring the wave velocity of films of liquid inside annular flow; they used an electrode probe design developed by Thwaites as part of his PhD thesis (Thwaites 1973). The probe section of the experiment consisted of two sets of probes set 1.685 m apart. They measured different film thicknesses for different liquid velocities, finding that as the liquid velocity increased, they recorded larger film thicknesses, ranging from 0.1 mm to more than 0.3 mm. A possible explanation was that, as the liquid rate increased, a greater mass of water was carried in the larger waves being produced.
The previous conductance techniques all used an AC power source; however, Tohru Fukano looked into new ways of measuring conductance using DC. He developed different methods of using electrodes either to measure local film thickness in a particular area or to measure the overall void fraction inside the test section (Fukano 1998). The first method used two brass ring electrodes connected to the test section, placed 30 mm apart and flush with the pipe wall. These ring electrodes delivered direct current to the test section, while two sensor electrodes situated at the base of each ring recorded the change in conductance.
This type of sensor gave excellent results when applied to measuring the average void fraction of an area, such as with slug flow. The second method he developed was for measuring local film thickness; it worked on the same principle as the earlier probe techniques. The design consisted of a central rod inside two rings; the central rod and the outer ring acted as power electrodes, while the middle ring and the central rod were used to sense the change in conductance (Fukano 1998). This type of sensor gave good results when applied to local film thickness, and was able to measure peaks and troughs in the waves of the film down to a thickness of 0.12 mm (Fukano 1998). Ito et al. (2014) used a hybrid method in which a liquid film conductance sensor was combined with void fraction measurement by neutron radiography. They first took images of the multiphase flow and estimated the thickness of the film from these; they then measured the film with the conductance sensor and compared the results with the estimates from the images. The estimates were very close, with a measured film of up to 100 µm.
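A common step shared by all of these conductance probes is converting a measured conductance into a film thickness via a calibration table built from known films. The sketch below shows that calibration step with simple linear interpolation; the calibration pairs and units are invented for illustration, not taken from any of the cited experiments.

```python
def interpolate_thickness(conductance, calibration):
    """Map a measured conductance to film thickness by linear interpolation
    over a calibration table of (conductance, thickness_mm) pairs.

    Readings outside the calibrated range are clamped to the end points.
    """
    pts = sorted(calibration)
    if conductance <= pts[0][0]:
        return pts[0][1]
    if conductance >= pts[-1][0]:
        return pts[-1][1]
    for (c0, t0), (c1, t1) in zip(pts, pts[1:]):
        if c0 <= conductance <= c1:
            return t0 + (t1 - t0) * (conductance - c0) / (c1 - c0)

# Hypothetical calibration: conductance in microsiemens vs thickness in mm.
cal = [(5.0, 0.1), (20.0, 0.5), (60.0, 1.0), (150.0, 2.5)]
print(round(interpolate_thickness(40.0, cal), 3))  # → 0.75
```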

Capacitance
Capacitance techniques work upon the principle of measuring the change in capacitance between different phases inside a mixture. This capacitance is a function of the geometry of the conducting material and its dielectric permittivity. At a basic level, a capacitance sensor directly measures the capacitance between plates that are a set distance apart.
A non-intrusive capacitance method was first proposed by Ozgut et al. (1973), who designed a capacitance probe with two electrodes flush to the edge of the pipe wall. This system worked on the simple principle that when a current was put across the two electrode plates, a set capacitance was measured; when a liquid and gas mixture was present in the gap, the capacitance changed. Calibrating the film thickness against capacitance means local film thickness can be measured from the size of the capacitance between the plates.
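The underlying physics can be sketched with an idealised parallel-plate model in which a liquid film and the remaining gas gap act as dielectric layers in series, so a thicker film gives a higher capacitance; inverting that relation recovers the film thickness. This is a simplified stand-in for the flush-electrode geometry, and the permittivities and dimensions are illustrative assumptions.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(film_m, gap_m, area_m2, eps_film=80.0, eps_gas=1.0):
    """Capacitance of parallel plates with a liquid film of thickness
    film_m on one plate and gas filling the rest of the gap (series layers)."""
    return EPS0 * area_m2 / (film_m / eps_film + (gap_m - film_m) / eps_gas)

def thickness_from_capacitance(c_meas, gap_m, area_m2, tol=1e-12):
    """Invert plate_capacitance by bisection: a thicker film raises C."""
    lo, hi = 0.0, gap_m
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if plate_capacitance(mid, gap_m, area_m2) < c_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative geometry: 10 mm gap, 1 cm^2 plates, 0.5 mm water film.
c = plate_capacitance(0.5e-3, 10e-3, 1e-4)
print(f"{thickness_from_capacitance(c, 10e-3, 1e-4) * 1e3:.3f} mm")  # → 0.500 mm
```

In practice the mapping from capacitance to thickness is established by calibration rather than by this idealised geometry, but the monotonic film-to-capacitance relation is the same.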
The results from these tests showed the volume of liquid in the film for different lengths of bubbles passing through the pipe. It was found that as the bubble length increases, the length of the film increases and so there is a greater volume of liquid on the edge of the pipe. Although the paper mentions it is possible to measure local film thickness, no results were shown.
Further advancement of this technique was published by Klausner et al. (1992), who used two sets of plates: one at the top wall of the pipe and one at the lower wall. Each set could then independently measure the film thickness in its local region by measuring the change in capacitance between its plates (Klausner et al. 1992). The tests were performed with refrigerant R113 over different temperature ranges.
Due to the change in fluid, the films measured were an order of magnitude larger than would be recorded for annular flow with air and water, at between 1 mm and 25 mm. The sensor was nevertheless accurate, with an average error as small as ±2%.

Damsohn and Prasser (2009) used a capacitance sensor that takes one-dimensional measurements, which differs from electrical capacitance tomography in that it does not calculate void fraction or show a full cross section of the flow. The sensor was made up of rows of small electrodes over which the overall average potential field was measured as a film of water passed over them. The setup used a glass channel down which air from a fan was blown while a module injected water on all four sides. The technique was able to measure films from less than 100 µm all the way to 1 mm before the potential field was saturated. What is interesting about the technique is that, if one line of electrode sensors is sampled over a set period of time, a profile of the wave structure of the film can be visualised; this showed a film structure with a core film of 700 µm.

Another capacitance method to measure film thickness of an annular flow was conducted by Li et al. (2005), who measured the thickness of a film forming inside a thermosyphon using evaporated water vapour. They used an ECT sensor with a 12 electrode setup giving 66 individual measurements, to give a cross-sectional view of the flow regime. The experimental data were then compared to a model using the thermal conductivity of the fluid, derived from the equation for calculating the Nusselt number (Li et al. 2005).
By rearranging equation 7, the thermal conductivity of the fluid can be calculated. This dimensionless number forms the basis of Nusselt's equation for calculating a film of liquid on the inside of a pipe: Nusselt modelled the flow of liquid on the pipe wall in 1916 and, making the simple assumption that there was no shear at the gas-liquid interface, devised the following equation for film thickness (Li et al. 2005):

δ = [4·µ·k·L·(T_v − T_w) / (g·h_fg·ρ_l·(ρ_l − ρ_v))]^(1/4)

Equation 7: Nusselt's equation for modelling film thickness, where µ is the dynamic viscosity, k is the thermal conductivity, L is the flow length, T is the temperature, g is the gravitational acceleration, h_fg is the latent heat of vaporisation, ρ is the density, and the subscripts l, v and w denote the liquid, vapour and wall respectively.
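A quick numerical evaluation of the classical Nusselt film thickness can be sketched as below. The property values are approximate figures for saturated water at around 100 °C, chosen purely for illustration; they give a film thickness of the same order as the measurements reported for this technique.

```python
def nusselt_film_thickness(mu, k, length, dT, h_fg, rho_l, rho_v, g=9.81):
    """Classical Nusselt (1916) laminar condensation film thickness at a
    distance `length` down the wall, assuming no interfacial shear:

        delta = [4*mu*k*L*(Tv - Tw) / (g*h_fg*rho_l*(rho_l - rho_v))]**0.25
    """
    return (4.0 * mu * k * length * dT /
            (g * h_fg * rho_l * (rho_l - rho_v))) ** 0.25

# Approximate properties of saturated water at ~100 C (illustrative only):
# mu ~ 2.8e-4 Pa.s, k ~ 0.68 W/m.K, h_fg ~ 2.257e6 J/kg,
# rho_l ~ 958 kg/m^3, rho_v ~ 0.6 kg/m^3, 5 K wall subcooling, L = 0.1 m.
delta = nusselt_film_thickness(mu=2.8e-4, k=0.68, length=0.1, dT=5.0,
                               h_fg=2.257e6, rho_l=958.0, rho_v=0.6)
print(f"{delta * 1e3:.3f} mm")
```

The result is a few hundredths of a millimetre, consistent with the 0.08 to 0.27 mm range measured in the ECT experiments.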
Film thicknesses were recorded from 0.08 mm up to 0.27 mm; however, the paper does not state the measurement uncertainties, so the accuracy of these results is unknown. These film thickness results for annular flow are also similar to tests completed by Cui et al. (2014) on oil film thickness in annular flow, where eight electrodes in four pairs were used. Their results measured thicknesses of between 100 and 500 µm, with measured capacitances in the picofarad range.

Ultrasonic
This category of techniques relies upon the properties of ultrasonic waves, which lie above the audible range of human hearing, that is, above 20 kHz; the typical frequencies used in materials testing are much higher, beyond 20 MHz. This class of techniques first saw experimental use in 1955, when they were mainly tested by oil companies looking for a way to determine whether there was a film of water in an oil flow (Lynnworth and Liu 2006). Since then the technology has steadily been reduced in scale so that it can be attached to smaller and smaller pipes.
The basis of the technology is a transducer fixed to a length of pipe to generate ultrasonic waves, relying on the principle that the speed of the wave changes as it passes from one material to another; this is calculated with the following equation:

Equation 8: c = √(K/ρ), the speed of sound through different materials, where c is the speed of sound, ρ is the density of the material and K is the bulk modulus of the material.
Because the speed of the wave changes as it passes from one material to another, the next wave property comes into effect: refraction. The frequency of the wave is constant, so as the speed changes, so must the wavelength, and refraction occurs; this is calculated using Snell's law:

Equation 9: Snell's law, sin θ₁ / c₁ = sin θ₂ / c₂ (Ultrasound 2006)
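Snell's law in its acoustic form can be evaluated directly; the sketch below uses nominal sound speeds for water and steel (illustrative round figures) and also flags the case beyond the critical angle, where no refracted longitudinal wave exists.

```python
import math

def refracted_angle(theta1_deg, c1, c2):
    """Acoustic Snell's law: sin(theta1)/c1 = sin(theta2)/c2.

    Returns the refracted angle in degrees, or None when the incident
    angle exceeds the critical angle (total internal reflection).
    """
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Longitudinal wave from water (~1480 m/s) into steel (~5900 m/s):
print(round(refracted_angle(10.0, 1480.0, 5900.0), 1))  # → 43.8
print(refracted_angle(20.0, 1480.0, 5900.0))            # → None (beyond critical angle)
```

The strong refraction at a water-steel interface is why the choice of incident angle and wave mode matters so much in the experiments discussed next.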
Sanehiro Wada, Hiroshige Kikura and Masanori Aritomi explored this effect by passing ultrasonic waves through a plate submerged in water, measuring the angle that gave the maximum sound pressure for a longitudinal or a shear wave (Wada et al. 2006).
Interestingly, there are two peaks showing where the incident angle is optimal: at 0 degrees for a longitudinal wave and at 45 degrees for a shear wave. They then needed to know which type of wave to use, since the transmission ratio for ultrasound depends on the wave type; this showed that longitudinal waves were better for plexiglass, while shear waves were better for opaque media like carbon steel. This was backed by earlier experiments by Morala et al. (1983), who also found that shear waves were best for carbon steels. Wada et al. (2006) tested their technique on annular flow running through a carbon steel pipe, developing a method of testing the flow by measuring the spatio-temporal distribution of the ultrasound echoes.
The presence of a thin film of liquid in the annular flow is indicated by the appearance of dark lines. In condition C the lines are more positively slanted and longer than in condition D, indicating a thicker film in C moving at a higher velocity. This is an improvement upon previous attempts to measure a wavy film in fast flowing annular flow: Lu et al. (1993) had difficulty with wavy films due to the reflection of the waves, and Pedersen et al. (2000) could only measure the thickness of planar films. The reviews all stated that the average film thickness found by the various experiments was between 50 and 500 µm; this technique allows the film thickness to be measured to the correct depth, but it does not give an overall view of the annular flow.
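The conversion at the heart of these pulse-echo measurements can be sketched as follows: the film thickness is half the round-trip distance travelled by the echo between the wall and the liquid-gas interface. The timing value below is illustrative, chosen to land in the 50 to 500 µm range quoted above.

```python
def pulse_echo_thickness(delta_t_s, c_m_s=1480.0):
    """Film thickness from the round-trip time between the wall echo and
    the liquid-gas interface echo: d = c * t / 2.

    c defaults to a nominal speed of sound in water (m/s).
    """
    return c_m_s * delta_t_s / 2.0

# A 0.27 microsecond round trip in water corresponds to ~200 um of film.
print(f"{pulse_echo_thickness(0.27e-6) * 1e6:.0f} um")  # → 200 um
```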

Droplet Phase Measurement Techniques
The previous two groups of techniques, cross-sectional void fraction and film thickness measurement, are mainly used to measure the steam before it enters the low pressure turbine, to ensure that the steam is of a certain quality and that the moisture separator has removed any liquid. However, once the steam is inside the turbine, it is imperative that continuous monitoring of quality is maintained. By the beginning of the 1960s, the process of water vapour condensation was quite well understood; an analysis conducted in 1954 highlighted the progress of research into water vapour condensation in high speed flows. The research by Wegener (1954) allowed for the prediction of the nucleation rate of droplets within the condensing flow, and also gave rise to a method of calculating thermodynamic properties using only the static pressure and the channel geometry. When water droplet formation was applied to steam turbines, it indicated the mechanisms by which erosion of the turbine blades can occur. This process was further explored by Gardner (1963), who reviewed the history of the research and gave possible explanations for erosion patterns. Among the most important findings were calculations and experiments indicating that large droplets are mainly produced by the stators; due to their size, these droplets do not follow the steam flow over the rotor blades, and their resulting impact velocity becomes the main cause of droplet erosion.
Therefore, measurement of this droplet phase is extremely important in order to better understand the causes of heterogeneous and homogeneous nucleation, to better predict the effects upon the low pressure turbine over time, and to calculate the liquid fraction of the steam. The current knowledge of droplet phase measurement within a confined and complex moving geometry, as distinct from void fraction measurement in a pipe, is compiled into the following sections: photon attenuation and scattering; direct optical measurement; electrostatic and microwave.

Photon Attenuation and Scattering Based Probes
Photon based probes can use a number of techniques such as forward scattering spectrometry, X-ray and visible photography, attenuation methods, or techniques using holographic and microvideographic probes. These methods can be split into two groups: those that look at the flow locally, droplet by droplet, and those that look at the difference in the scattering of light caused by different sizes of droplets. Focusing on light scattering based probes, the technique is best suited to fine droplet sizes.
Early investigations focused on calibrating the equipment using metal and metal oxide clusters to represent different droplet sizes. Moses and Stein (1977) used laser light scattering to measure droplet growth during homogeneous nucleation. They found excellent agreement throughout the measurements with the nucleation rate expression and the droplet growth model of Gyarmathy (1963) and Gyarmathy and Meyer (1965). This was expanded further by the Forward Scattering Spectrometer Probes (FSSP) developed by Knollenberg in 1981. A series of papers analysed the use of the probes on droplets of different sizes and speeds (Dye and Baumgardner 1984). Using a 632.8 nm laser beam for each probe, they demonstrated measurement differences between various probe configurations, including the scattering angles over which light is collected and the focal length of the collecting lens. This process highlighted uncertainties with calibration and showed that the limitations of the probes are down to electronic response, optical resolution and calibration uncertainties. The last paper in the series concentrated on time response and laser inhomogeneity (Baumgardner and Spowart 1990). The findings demonstrated that at airspeeds exceeding 50 m/s the size of the droplets was underestimated and the size distributions became broader due to the non-uniformity of the laser beam. To counter these uncertainties an improved version, the fast Forward Scattering Spectrometer Probe (fast-FSSP), was developed. Brenguier et al. (1998) tested the fast-FSSP on droplet size distribution to see how the accuracy was improved. Each of the previous uncertainties was analysed, showing improvements upon the standard FSSP. Additional variables such as pulse duration and interarrival times between measurements were shown to be useful for further improving the sizing of the particles. Finally, there was a clear distinction between the standard FSSP and the improved FSSP, in that the improved probe could show higher concentrations of droplets within a smaller range.
In 2005 the Forward Scattering Spectrometer Probe-100 (FSSP-100) was tested to produce procedures that could correct instrumental artefacts in droplet measurement (Coelho et al. 2005). The correction procedures were tested with a model that reproduced some of the key features. An improved system was then developed, based upon measuring the attenuation rate of light, with a pressure measurement tip; this probe was used to measure local wetness, total wetness of exhaust steam and the size distribution of fine droplets (Cai et al. 2009). The measurement head is wedge-shaped with four holes. As Figure 13 shows, the probe can be rotated within the flow in steps of 0.5 degrees and inserted in steps of 0.5 mm.

Figure 13: The portion of the FSSP-100 probe for droplet measurement on the left and the head of the probe for wetness pressure measurement on the right (Cai et al. 2009)

The combined probe was able to show that the local wetness in the middle region of the blade is twice that of the hub and tip regions, and gave the mean diameter of fine droplets as 0.8 µm. Another novel method for measuring light scattering caused by droplets is the Optical Backscatter Probe, which can measure droplets from 40 to 100 µm at speeds of up to 200 m/s (Ilias et al. 2016). A droplet generator was used to calibrate the probe; the sizing error was ±4.7 µm with a speed error of 2.3 m/s. The probe was used in an axial turbine test facility (LISA), where it showed that as airflow increased, the droplet diameter decreased. Finally, if the turbine is in a part load condition, droplet velocity increases by 40% and erosion can increase by as much as 32 times. The advantage of this type of probe is that single droplets and steam wetness can be measured together; coupled with the ability to rotate and alter the position of the probe, a suitable computer system would make real time measurement possible.

Direct Optical Measurement Systems
Another way of optically viewing the flow patterns and droplets is to view them individually using photography; an early investigation into two phase flow pattern photography was conducted by Hewitt and Roberts, who looked at X-ray radiography and flash photography (Hewitt and Roberts 1969). They were able to identify the plug, churn, annular and wispy annular flow regimes. An advancement of the process is holography, where a particle is illuminated by a laser and a sensitive film records the light scattered by the object together with a reference beam. The process was investigated in depth across a wide variety of experiments. Belz and Menzel (1977) analysed particles and liquid droplets using holography, in particular comparing the quality of the reconstructed data with the effect on image quality. Their results showed that turbulence was an issue which caused a mottling effect on the background and made it difficult to recognise objects. This highlighted the problem, in a low pressure turbine, of taking images of a foggy, high density, very turbulent flow. To overcome this, Kleitz and Courant (1989) used a more advanced holographic probe that analysed images of coal particles using a video circuit which displayed the results as a histogram. The team also used a "microvideographic" probe, which they developed for use at higher droplet densities and which could be made much simpler, since it worked with white light and did not require a ruby or YAG (Yttrium Aluminium Garnet) laser.
Due to the discussed limitations of holographic probes, a better method was required to measure high density droplet flows with high turbulence. Photographic methods were shown to be useful in validating condensing flow theory (White et al. 1996). A previous review of wet steam measurement techniques (Kleitz and Dorey 2004) highlighted the use of shadowgraphy and the working principles of the microvideo probe.
Shadowgraphy works because droplets appear as dark spots on a clear background; the blurriness of each droplet depends on how out of focus the droplet is and on its size. A CCD camera records the images for real time analysis. The results are given as a histogram, with a measurement range between 5 and 500 µm; the video images are analysed at between 10 and 25 Hz. An extensive study into a photographic probe using shadowgraphy was carried out on wet steam by Vernon (2014). The work detailed the first use of LED illumination inside such a probe, which had the advantage of removing motion blur from the droplets.
To conclude, photographic methods of measuring the droplet size of wet steam give useful insights into the effects of condensation inside a low pressure turbine. They allow for visualisation of droplets produced through heterogeneous and homogeneous nucleation, giving a clear view of the different condensation regions. However, direct optical measurement has the drawback of not being able to calculate the total or local steam wetness; other techniques such as electrical or microwave based methods are better suited to this.

Electrostatic Probes
Early tests of droplet size measurement in water sprays were conducted in the 1960s, based upon work by Gardiner (1964); Thorpe and Wood (1967) analysed the best way to measure droplets in the low pressure stage of a 350 MW steam turbine. They decided upon an invasive electrostatic probe, which protruded into the flow so that half an inch of wire was exposed as the detection area. Their results showed that droplets from 50-200 µm could be detected. However, there were problems and limitations with the probe: a defect in the design allowed a film of liquid to alter the signal, and the accuracy of large droplet sizing became an issue. An accurate method was later developed to measure bubble sizes inside a bubble column (Yamashita et al. 1979). The probe had two electrodes: one a barbed tip of wire, the other a platinum wire used as the insulated conductor. They were able to determine the shape of the bubbles and sizes ranging from 2 to 12 mm; these results were tested against a photographic method and showed strong correlation.
From this point, most electrical probe research moved on to liquid film measurement. However, electrical probes also started to be used to analyse the volume charge density of droplets and its effects on the low pressure turbine. This property, with links to steam condensation and corrosion damage, was studied in depth (Kachuriner and Orlik 2007). A further study with experimental validation using an electrical probe showed the droplet charges and how the charge changes across the rotors and stators (Tarelin 2014); it contains an extensive review of the current knowledge of droplet charge polarity and its effects on erosion. The probe was then used to measure the ionisation of the steam flow, which identified and validated the major factors that cause electrochemical corrosion of turbine blades.

Microwave Probes
Since the 1970s, microwave techniques such as microwave spectroscopy have been studied in depth to measure mixtures of gases. It was not until the 1980s and 1990s that microwave methods were truly applied to wet steam or water vapour. An early study looked into using a microwave sensor to measure permittivity changes in methane caused by water vapour (Arnaud et al. 1992). This work was expanded upon by measuring water vapour and ethylene oxide in different gas mixtures using a microwave spectrometer (Zhu et al. 1996). Two problems were identified with microwave spectroscopy: absorption line broadening and gas memory. Absorption line broadening was solved by dilution with nitrogen; gas memory was solved by use of a dynamic sampling method.
These solutions were then utilised in an experiment where a microwave sensor was designed and built to detect water vapour within sulphur hexafluoride at concentrations as low as 3 ppm (Rouleau et al. 2000).
The system worked by measuring the voltage difference between a measuring cavity resonator and a reference resonator filled with sulphur hexafluoride; the difference indicates the amount of water vapour present. The approach continued to be improved; by the early 2000s, microwave cavity resonator systems were starting to be used to measure wet steam in turbines. One example of this work (Yongqian et al. 2006) inserted a frequency-tracking unit into the system, allowing the voltage-controlled oscillator to follow the resonant frequency of the resonator filled with wet steam. Comparing the experimental permittivity results from the sensor with theoretical results showed strong agreement, demonstrating the feasibility of the system. Error analysis was then performed on the system in 2010, with further improvements allowing higher-quality measurements (Zhang and Liu 2010). The team went on to analyse the uncertainty in the formula for measuring the humidity of wet steam. From the theory they deduced a formula exploiting the fact that, at resonance, the energy stored in the electric field equals the energy stored in the magnetic field; this formula improved the measurement accuracy by more than a factor of three.
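The principle behind the cavity approach can be sketched briefly. For an idealised cavity uniformly filled with a low-loss dielectric, the resonant frequency scales as f ∝ 1/√εr, so the wet-steam permittivity follows from the dry and wet resonant frequencies. The sketch below is illustrative only: the permittivity values and the linear mixing rule are assumptions, not the calibrated model used by the cited authors.

```python
def mixture_permittivity(f_dry_hz, f_wet_hz, eps_dry=1.006):
    # f ~ 1/sqrt(eps_r) for a uniformly filled cavity, so the mixture
    # permittivity follows from the downward shift of the resonant frequency.
    # eps_dry ~ 1.006 for dry steam is an assumed, illustrative value.
    return eps_dry * (f_dry_hz / f_wet_hz) ** 2

def wetness_from_permittivity(eps_mix, eps_dry=1.006, eps_water=55.0):
    # Linear (volume-fraction) mixing rule -- a deliberate simplification;
    # eps_water ~ 55 for liquid water near saturation is likewise assumed.
    return (eps_mix - eps_dry) / (eps_water - eps_dry)

eps = mixture_permittivity(10.000e9, 9.990e9)   # 10 MHz downward shift
wetness = wetness_from_permittivity(eps)        # small liquid volume fraction
```

Note that even a few percent wetness by mass corresponds to a tiny liquid volume fraction, which is why the permittivity shift, and hence the frequency shift, is small and demands the frequency-tracking described above.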
Another method of microwave sensing uses an interferometer; this technique was used to measure the quality of steam at a generating plant (Jean 2007). The interferometer, shown in Figure 14, works by measuring the differential path length between two transmission lines, which determines the change in permittivity as the steam quality changes. The device was able to measure a frequency shift from 10.346 GHz for 100% dry steam to 10.275 GHz for a steam quality of 50%. In conclusion, when microwave probes are used to measure a droplet-laden flow at the final stages of a low-pressure turbine, the results correlate well with theoretical predictions. The technique is straightforward to implement; with the interferometer in particular, the device is much simpler to attach to the system in-line.
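The two reported operating points suggest how such a device could be read out in practice. A minimal sketch, assuming (purely for illustration) that steam quality varies linearly with measured frequency between the two calibration points reported by Jean (2007):

```python
def quality_from_frequency(f_ghz, f_dry=10.346, f_wet50=10.275):
    # Two calibration points from the cited work: 10.346 GHz at quality 1.0
    # (dry steam) and 10.275 GHz at quality 0.5.  Linearity between these
    # points is an assumption for illustration, not a claim from the source.
    slope = (1.0 - 0.5) / (f_dry - f_wet50)     # quality per GHz
    return 0.5 + slope * (f_ghz - f_wet50)

q_dry = quality_from_frequency(10.346)   # recovers quality 1.0
q_mid = quality_from_frequency(10.275)   # recovers quality 0.5
```

A real sensor would need a calibration curve over the full quality range; the sketch only shows that the roughly 70 MHz span between the two reported points gives usable sensitivity.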

Conclusion and Summary
There are several techniques for measuring a multiphase flow system consisting of air and water; all are based upon measuring a cross-section of the pipe using the attenuation or permittivity characteristics of the different phases, from which the void fraction of the flow can be obtained. The results from each technique highlighted inaccuracies caused by the film of liquid that is mainly present in annular or slug flow.
Photon-based tomography systems such as gamma- and X-ray have been shown to be highly accurate when there is a large difference in the attenuation rate between the phases of the flow. Some systems, such as that of Nazemi et al. (2015), included the calculation of velocity, which would allow a mass flow reading. Neutron tomography shares the same advantages; however, it requires a sufficiently large reactor to obtain the necessary neutron flux. Both techniques share the disadvantage that, depending on the pipe material, the source has to be more intense: when penetrating a metal pipe, the difference between the attenuation rates of the phases becomes small compared with the effect of the pipe wall, reducing the resolution.
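For a single beam path, the attenuation measurement underlying these systems reduces to the Beer-Lambert law: the logarithm of the received intensity is linear in the path-averaged attenuation. A minimal sketch, assuming two calibration intensities recorded with the path all gas and all liquid (the numbers below are hypothetical):

```python
import math

def void_fraction(i_measured, i_gas, i_liquid):
    # Beer-Lambert law: ln(I) is linear in attenuation along the beam, so a
    # two-phase path gives a logarithmic interpolation between the all-gas
    # and all-liquid calibration intensities (single beam, scattering ignored).
    return math.log(i_measured / i_liquid) / math.log(i_gas / i_liquid)

alpha = void_fraction(520.0, 900.0, 300.0)  # roughly half gas along this beam
```

Combining many such beams at different angles is what lets a tomographic system reconstruct the void fraction over the full cross-section; the small gas/liquid attenuation contrast through a metal pipe wall is exactly why the resolution drops as described above.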
Electrical capacitance tomography also works through a metal pipe, with electrodes fitted into the pipe wall. This allows the permittivity to be measured over a cross-section of the flow, highlighting phase changes. The system was shown to be highly versatile for all flow types, giving accurate, high-resolution images depending on the number of electrodes used: four electrodes are desirable for measuring bubbly flow, while six to twelve are better for annular droplet flow. Ultrasonic tomography can also work through any pipe material, with the transmitters and receivers fitted into the pipe wall; however, the technique was shown to work only for simple flows, because liquid causes a near-total reflection of the sound wave (Asher 1983; Hoyle 1996), which leads to large measurement errors. All tests carried out were on flows with single gas bubbles.
Photon- and nuclear-based film measurement techniques rely upon measuring the difference in attenuation between a photon or neutron sent through gas alone and through gas and water together to a receiver; the accuracy is increased when plastic pipes are used instead of metal. Optical methods work in the same way, with the additional step of measuring the reflection of the photon when it hits a phase boundary. The reflection-based optical system could be used in a metal pipe, as demonstrated by Yu et al. (1996); ultrasonic film measurement is conducted in the same way and produces similar results. Electrical methods work by measuring the change in either capacitance or conductance as the flow passes over the electrode; using two or more electrodes in sets, it is possible to measure the velocity of the film.
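The two-electrode velocity measurement is commonly done by cross-correlating the upstream and downstream signals: the film's transit time between the electrodes is the lag at which the correlation peaks. A minimal sketch, with a synthetic delayed signal standing in for real probe data; the electrode spacing and sample interval are assumed values:

```python
import random

def film_velocity(sig_up, sig_down, dt, spacing_m, max_lag=50):
    # Find the lag (in samples) that maximises the cross-correlation of the
    # upstream and downstream electrode signals; velocity = spacing / delay.
    # Illustrative sketch only; real probes need filtering and windowing.
    n = len(sig_up)
    a = [x - sum(sig_up) / n for x in sig_up]
    b = [x - sum(sig_down) / n for x in sig_down]
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        c = sum(a[i] * b[i + lag] for i in range(n - lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return spacing_m / (best_lag * dt)

# Synthetic test: downstream signal is the upstream one delayed by 5 samples.
random.seed(1)
up = [random.gauss(0.0, 1.0) for _ in range(300)]
down = [0.0] * 5 + up[:-5]
v = film_velocity(up, down, dt=1e-3, spacing_m=0.01)  # 0.01 m over 5 ms
```

This is the same principle regardless of whether the electrodes sense capacitance or conductance, which is why electrode pairs give both film thickness and film velocity from one installation.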
From the information presented in this review, a possible improvement would be to combine a cross-sectional tomography measurement system with a film measurement device. Ito et al. (2014) used a device combining neutron radiography with a conductance sensor to image the two-phase flow and visualise the thickness of the wavy film. Such a system allows a real-time mass flow reading of the multiphase flow, giving a clear indication of the void fraction and the liquid content.
Linking back to the original goal of steam wetness measurement, the main issues are the small difference between the steam and water phases, the size of the liquid phase, and the pipe material. This limits the methods that can be applied in industry for measuring steam before the turbine to non-intrusive ones: those that can be integrated into the pipe wall, or whose signal can pass through the wall while still showing a detectable difference in attenuation rate between steam and water. A candidate for further experimental work would be to combine a highly accurate non-intrusive film measurement system, such as that of Oliveira et al. (2006), with an ECT system for void fraction measurement. The result would be a system that could measure the speed of the film as well as the overall cross-sectional water content per second, giving an accurate mass flow measuring device for steam wetness measurement.
With regard to techniques that can be used in-line on the low- or high-pressure turbines, photon attenuation and scattering probes are the most reliable choice; the size and design of the probe allow insertion into multiple stages of the turbine, clearly defining how the steam condenses and where problematic erosion could occur. Electrostatic probes have been shown to be most useful for highlighting the electric charge of the steam and its effect on blade erosion (Tarelin 2014). Using a mixture of electrostatic and photon probes would therefore allow accurate measurement of the local and overall steam wetness and of the charge of the wet steam at critical points. Used in conjunction with techniques that measure steam quality before the turbines, this could further improve the efficiency of the steam cycle.