A Component Based Heuristic Search Method with Evolutionary Eliminations

Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with evolutionary eliminations for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then to implement two evolutionary elimination strategies, mimicking the natural selection and natural mutation processes respectively, on these components to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs an evaluation function which measures how well each component contributes towards the final objective. Two elimination steps are then applied: the first eliminates a number of components that are deemed unworthy of staying in the current schedule; the second may also, with a low probability, throw out some worthy components. The eliminated components are replenished with new ones using a set of constructive heuristics based on local optimality criteria. Computational results using 52 data instances demonstrate the applicability of the proposed approach in solving real-world problems.


Introduction
Employee scheduling has been widely studied for more than 40 years. The following survey papers give an overview of the area: Bradley and Martin (1990) and Ernst et al. (2004a, 2004b). Employee scheduling can be thought of as the problem of assigning employees to shifts or duties over a scheduling period so that certain organizational and personal constraints are satisfied. It involves the construction of a schedule for each employee within an organization in order for a set of tasks to be fulfilled. In the domain of healthcare, this is particularly challenging because of the presence of a range of different staff requirements on different days and shifts. Unlike many other organizations, healthcare institutions work twenty-four hours a day for every single day of the year. Irregular shift work has an effect on the nurses' well-being and job satisfaction (Mueller and McCloskey 1990). The extent to which the staff roster satisfies the staff can impact significantly upon the working environment.
Automatic approaches have significant benefits in saving administrative staff time and also generally improve the quality of the schedules produced. However, until recently, most personnel scheduling problems in hospitals were solved manually (Silvestro and Silvestro 2000).
Scheduling by hand is usually a very time consuming task. Without an automatic tool to generate schedules and to test the quality of a constructed schedule, planners often have to fall back on very straightforward constraints on working time and idle time in the recurring process. Even when hospitals do have computerized systems, these often provide only testing and graphical features; automatic schedule generation is still not common. Moreover, there is a growing realisation that the automated generation of personnel schedules within healthcare can provide significant benefits and savings. In this paper, we focus on the development of new techniques for automatic nurse rostering systems. A general overview of various approaches for nurse rostering can be found in Sitompul and Randhawa (1990) and Cheang et al. (2003).
Most real world nurse rostering problems are extremely complex and difficult. Tien and Kamiyama (1982), for example, say nurse rostering is more complex than the travelling salesman problem due to the additional constraint of total number of working days within the scheduling period. Since the 1960's, many papers have been published on various aspects of nurse rostering.
Early papers (Warner and Prawda 1972; Miller et al. 1976) attempted to solve the problem by using mathematical programming models. However, computational difficulties exist with these approaches due to the enormous size of the search space. In addition, for most real problems, the goal of finding the 'optimal' solution is not only computationally infeasible, but also largely meaningless. Hospital administrators normally want to quickly create a high quality schedule that satisfies all hard constraints and as many soft constraints as possible.
The above observations have led to a number of other attempts to solve real world nurse rostering problems. Several heuristic methods have been developed (e.g. Blau 1985; Anzai and Miura 1987). In the 1980s and later, artificial intelligence methods for nurse rostering, such as constraint programming (Meyer auf'm Hofe 2001), expert systems (Chen and Yeung 1993) and knowledge-based systems (Beddoe and Petrovic 2006), were investigated with some success. In the 1990s and later, many papers tackled the problem with meta-heuristic methods, including simulated annealing (Brusco and Jacobs 1995), variable neighbourhood search, tabu search (Dowsland 1998; Burke et al. 1999) and evolutionary methods (Kawanaka et al. 2001). In very recent years, there has been increasing interest in the study of mathematical programming based heuristics (Bard and Purnomo 2006; Beliën and Demeulemeester 2006) and of hyper-heuristics for the problem (Ross 2005; Özcan 2005). This paper tackles a nurse rostering problem arising at a major UK hospital (Aickelin and Dowsland 2000; Dowsland and Thompson 2000). Its target is to create weekly schedules for wards of nurses by assigning each nurse one of a number of predefined shift patterns in the most efficient way. Besides the traditional approach of Integer Linear Programming (Dowsland and Thompson 2000), a number of meta-heuristic approaches have been explored for this problem.
For example, in Aickelin and Dowsland (2000) and Aickelin and White (2004) various approaches based on genetic algorithms are presented. In Li and Aickelin (2004) an approach based on a learning classifier system is investigated, a tabu search hyper-heuristic has also been introduced, and in Aickelin and Li (2007) an estimation of distribution algorithm is described. In this paper we report a new component-based heuristic search approach with evolutionary eliminations, which implements optimization on the components within single schedules. This approach combines the features of iterative improvement and constructive perturbation with the ability to avoid getting stuck at local minima. Similar to the ruin and recreate principle reported in Schrimpf et al. (2000), our approach applies a new method to destroy part of a solution and then reconstruct it.
The framework of our new algorithm is an iterative improvement heuristic, in which the steps of Evaluation, Elimination-I, Elimination-II and Reconstruction are executed in a loop until a stopping condition is reached. In the Evaluation step, the current complete schedule is first decomposed into assignments for individual nurses, and then the assignment for each nurse is evaluated by a function based upon both hard constraints and soft constraints. In the Elimination-I step, some nurses are marked for rescheduling and their assignments are removed from the schedule according to the evaluation values of their assignments. In the Elimination-II step, each remaining nurse still has a small chance of being rescheduled, regardless of the evaluation value of his/her assignment. Finally, in the Reconstruction step, a refined greedy heuristic repairs the partial (candidate) solution, and the resulting complete solution is fed into the Evaluation step again to repeat the loop.
Our proposed approach belongs to the general class of local search methods. In particular, it is somewhat similar to the Iterated Local Search algorithm (Lourenco et al. 2002): both include a solution elimination phase and an improvement phase. However, they differ in how these two phases are implemented. The purpose of elimination in Iterated Local Search is to transform one complete solution into another complete solution, which then serves as the starting point for the local heuristics that follow. The aim of elimination in our method, by contrast, is to transform one complete solution into a partial solution, which is then fed into the reconstruction heuristics for repair.
The rest of this paper is organized as follows. Section 2 gives an overview of the nurse rostering problem, and introduces the general framework of our methodology. Section 3 presents our algorithm for nurse rostering. Benchmark results using real-world data sets collected from a major UK hospital are presented in section 4. Concluding remarks are in section 5.

The Nurse Rostering Problem
The nurse rostering problem tackled in this paper is concerned with creating weekly schedules for wards of up to 30 nurses at a large UK hospital. These schedules have to meet the demand for a minimum number of nurses of different grades on each shift, whilst being seen to be fair by the staff concerned and satisfying working contracts. The fairness objective is achieved by meeting as many of the nurses' requests as possible and considering historical information (e.g. previous weekends) to ensure that unsatisfied requests and unpopular shifts are evenly distributed. In our model, the day is partitioned into three shifts: two types of day shift known as 'earlies' and 'lates', and a longer night shift. Due to hospital policy, a nurse would normally work either days or nights in a given week (but not both), and because of the difference in shift length, a full week's work would normally include more days than nights. However, some special nurses work other mixtures and the problem can hence not simply be decomposed into days and nights.
However, as described in Dowsland and Thompson (2000), the problem can be split into three independent stages. The first uses a knapsack model to ensure that there are sufficient nurses to meet the covering constraints. If not, additional nurses (agency staff) are allocated to the ward, so that the problem tackled in the second phase is always feasible. The second stage is the most difficult and involves allocating the actual days or nights a nurse works. Once this has been decided, a third phase uses a network flow model (Ahuja et al. 1993) to allocate those on days to 'earlies' and 'lates'. Since stages 1 and 3 can be solved quickly, this paper is only concerned with the highly constrained second step.
The days or nights that a nurse could work in one week define the set of feasible weekly work patterns (i.e. shift patterns) for that nurse. Each shift pattern can be represented as a 0-1 vector with 14 elements, where the first 7 elements represent the 7 days of the week and the last 7 elements the corresponding 7 nights of the week. A '1' in the vector denotes a scheduled day or night "worked", and a '0' one "not worked". For example, (1111100 0000000) would be a pattern where the nurse works the first 5 days and no nights. In total, the hospital allows just under 500 such shift patterns. A specific nurse's contract usually allows 50 to 100 of these. Depending on the nurse's preferences, the recent history of patterns worked, and the overall attractiveness of the pattern, a preference cost is allocated to each nurse-shift pattern pair. These values were set in close consultation with the hospital and range from 0 (perfect) to 100 (unacceptable), with a bias to lower values. Due to the introduction of these preference costs, which take into account historic information (e.g. weekends worked in previous weeks), we are able to reduce the planning horizon from the original five weeks to the current one week without affecting solution quality. Further details about the problem can be found in Dowsland (1998).
The problem can be formulated as an integer program. Let x_ij = 1 if nurse i works shift pattern j and 0 otherwise; let A(i) denote the set of feasible shift patterns for nurse i; let p_ij be the preference cost of nurse i working pattern j; let a_jk = 1 if pattern j covers period k and 0 otherwise; let q_is = 1 if nurse i is of grade s or higher and 0 otherwise; and let R_ks be the number of nurses demanded at grade s during period k. Then:

Minimize  sum_{i=1}^{n} sum_{j in A(i)} p_ij x_ij,  (1)

subject to

sum_{j in A(i)} x_ij = 1,  i in {1,...,n},  (2)

sum_{i=1}^{n} sum_{j in A(i)} q_is a_jk x_ij >= R_ks,  k in {1,...,14}, s in {1,...,g}.  (3)

The constraints outlined in (2) ensure that every nurse works exactly one shift pattern from his/her feasible set. The constraints represented by (3) ensure that the demand for nurses is fulfilled for every grade on every day and night; in line with hospital policy, more nurses than necessary may work during any given period. In practice, there is an acute shortage of nurses and actual overstaffing is very rare. Note that the definition of q_is allows higher graded nurses to substitute for those at lower grades if necessary. This problem can be regarded as a multiple-choice set-covering problem. The sets are given by the shift pattern vectors and the objective is to minimize the cost of the sets needed to provide sufficient cover for each shift at each grade. The constraints described in (2) enforce the choice of exactly one pattern (set) from the alternatives available for each nurse.
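The coverage constraints can be checked mechanically for a candidate assignment. The following is a minimal sketch using the notation of the formulation; the helper name is illustrative and not from the paper's code.

```python
def is_feasible(x, q, a, R, g):
    """Check the coverage constraints (3) for a complete assignment.

    x[i]    - index of the single shift pattern chosen for nurse i
              (constraint (2) holds because x is a total mapping);
    q[i][s] - 1 if nurse i is of grade s or higher, 0 otherwise;
    a[j][k] - 1 if shift pattern j covers period k, 0 otherwise;
    R[k][s] - demand for grade-s nurses during period k.
    """
    n = len(x)
    for k in range(14):
        for s in range(g):
            cover = sum(q[i][s] * a[x[i]][k] for i in range(n))
            if cover < R[k][s]:
                return False
    return True
```

For example, with two nurses of a single grade, one pattern covering only period 0 and one covering nothing, a demand of one nurse at period 0 is met only if some nurse works the covering pattern.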

General Description of the Component Based Heuristic Method with Evolutionary Elimination (CHEE)
The basic methodology iteratively operates the steps of Evaluation, Elimination-I, Elimination-II and Reconstruction in a loop on one solution (see the pseudo code presented in Figure 1). At the beginning of the loop, an Initialization step is used to obtain a starting solution and initialize some input parameters (e.g. stopping conditions). In the Evaluation step, the fitness (i.e. the degree of suitability) of each component in the current solution is evaluated under an evaluation function. Then, the fitness measure is used probabilistically to select components to be eliminated in the Elimination-I step. Components with high fitness have a lower probability of being eliminated. Furthermore, to escape local minima in the solution space, capabilities for uphill moves must be incorporated. This is carried out in the Elimination-II step by probabilistically eliminating even some superior components of the solution in a totally random manner.
The resulting partial solutions are then fed into the Reconstruction step, which implements application specific heuristics to derive a new and complete solution from partial solutions.
Throughout these iterations, the best solution is retained and finally returned as the final solution.
This algorithm uses a greedy search strategy to achieve improvement through iterative perturbation and reconstruction. The loop can be summarized by the following pseudo code (Figure 1):

t = 0; obtain an initial solution C(t); C_best = C(t);
Repeat:
    /* The Evaluation step */
    Evaluate the fitness of each component in the current solution C(t);
    /* The Elimination-I step */
    Eliminate components probabilistically according to their fitness;
    Obtain an incomplete solution S(t);
    /* The Elimination-II step */
    Randomly eliminate some components from S(t);
    /* The Reconstruction step */
    Add new components into S(t) to make it complete;
    If (C(t) is better than C_best) C_best = C(t);
    t = t + 1;
Until a stopping condition is reached;

In summary, our methodology differs from some other local search methods such as simulated annealing (Kirkpatrick et al. 1983) and tabu search (Glover 1989) in that it does not follow one trajectory in the search space. By systematically eliminating components of a solution and then replenishing it with new components, this algorithm essentially employs a longer sequence of moves between iterations, thus permitting more complex and more distant changes between successive solutions. This feature means that our method has the ability to jump quite easily out of local minima. Furthermore, unlike population-based evolutionary algorithms, which need to maintain a number of solutions as parents for offspring propagation in each generation, this method operates on a single solution at a time. Thus, it avoids the extra CPU time needed to maintain a set of solutions.

A Component Based Heuristic Procedure with Evolutionary Elimination for Nurse Rostering

The basic idea behind the method is to determine, for each current schedule, the fitness of the shift patterns assigned to individual nurses. The process keeps the shift patterns of some nurses that are well chosen (having high fitness values) in the current schedule and tries to replace the shift patterns of other nurses that have low fitness values. To enable the algorithm to execute iteratively, at each iteration a randomly produced threshold (in the range [0, 1]) is generated, and all shift patterns whose fitness values exceed the threshold are labelled as "good patterns" and survive in the current schedule. The remaining shift patterns are labelled as "bad patterns" and do not survive (become extinct). The fitness value therefore corresponds to the survival chance of a shift pattern assigned to a specific nurse. The "bad" shift patterns are removed from the current schedule and the corresponding nurses are released, waiting for their new assignments by a constructive heuristic. Following this, the above steps are iterated. Thus the global scheduling procedure is based on iterative improvement, while an iterative constructive process is performed within.

Initialization
In this step, an initial solution is generated to serve as a seed for its iterative improvement. It is well known that for most meta-heuristic algorithms, the initialization strategy can have a significant influence on performance. Thus, normally, a significant effort will be made to generate a starting point that is as good as possible. For nurse rostering, there are a number of heuristic techniques that can be applied to produce good starting solutions.
For our methodology, due to the fact that the replacement rate in its first iteration is relatively high, the performance is generally independent of the quality of the initial solution. However, if the seed is already a relatively good solution, the overall computation time will decrease. Since the major purpose of this paper is to demonstrate the performance and general applicability of the proposed methodology, we deliberately generate an extremely poor initial solution by randomly assigning a shift pattern to each nurse. The steps described in sections 3.2 to 3.5 are executed in sequence in a loop until a stopping condition (i.e. a solution quality target or the maximum number of iterations) is reached.

Evaluation
In this step, the fitness of individual nurses' assignments, based on complete schedules, is evaluated. The evaluation function should be normalized and hence can be formulated as a combination of two normalized terms, for instance

F(E_i) = (f_1(E_i) + f_2(E_i)) / 2,  i in {1,...,n},

where E_i is the shift pattern assigned to the i-th nurse, n is the number of nurses, and f_1(E_i) and f_2(E_i) are the contributions of E_i towards the preference and the feasibility aspects of the solution respectively.
f_1(E_i) evaluates the shift pattern assigned to a nurse in terms of the degree to which it satisfies the soft constraints (i.e. this nurse's preference for his/her assigned shift pattern). It can be formulated as

f_1(E_i) = (p_max − p_ij) / (p_max − p_min),

where p_ij is the preference cost of nurse i working shift pattern j, and p_max and p_min are the maximum and minimum preference cost values among the shift patterns of all nurses on the current schedule, respectively.
f_2(E_i) evaluates how far the shift pattern assigned to a nurse satisfies the hard constraints (i.e. coverage requirements and grade demands). This can be formulated as

f_2(E_i) = (c_ij − c_min) / (c_max − c_min),

where c_ij is the coverage contribution of nurse i working shift pattern j, and c_max and c_min are the maximum and minimum coverage contribution values among the shift patterns of all nurses on the current schedule, respectively.
In a current schedule, the coverage contribution of each nurse's shift pattern is its contribution to the cover of all three grades, which can be calculated as the sum of the grade one, two and three covered shifts that would become uncovered if the nurse did not work this shift pattern. Therefore, we formulate c_ij as

c_ij = sum_{s=1}^{3} q_is sum_{k=1}^{14} a_jk u_ks,

where u_ks is 1 if the demand R_ks during period k at grade s would no longer be met were this nurse removed, and 0 otherwise.
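The normalized evaluation terms above can be sketched directly. The equal weighting in the combined fitness below is an assumption of this sketch, as are all the helper names.

```python
def normalise(value, lo, hi, invert=False):
    """Map a raw value into [0, 1]; invert when smaller raw values are better."""
    if hi == lo:
        return 1.0
    t = (value - lo) / (hi - lo)
    return 1.0 - t if invert else t

def f1(p_ij, p_min, p_max):
    """Preference term: a lower preference cost is better, so invert."""
    return normalise(p_ij, p_min, p_max, invert=True)

def f2(c_ij, c_min, c_max):
    """Coverage term: a larger coverage contribution is better."""
    return normalise(c_ij, c_min, c_max)

def fitness(p_ij, p_min, p_max, c_ij, c_min, c_max):
    """Combine the two normalised terms (equal weights assumed here)."""
    return 0.5 * f1(p_ij, p_min, p_max) + 0.5 * f2(c_ij, c_min, c_max)
```

A perfect assignment (lowest cost, highest coverage contribution) scores 1.0; the worst scores 0.0.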

Elimination-I
This step determines whether the i-th nurse's assignment (denoted as E_i, i in {1,...,n}) should be retained for the next iteration, or whether it should be eliminated and the nurse placed in the queue waiting for the next rescheduling. This is done by comparing the assignment fitness F(E_i) to a random number r_s, generated once per iteration in the range [0, 1]. If F(E_i) ≤ r_s, then E_i is removed from the current schedule; otherwise E_i survives in its present position. The days and nights that the eliminated nurses' shift patterns cover are then released and updated for the next Reconstruction step (see below). By using this step, an assignment E_i with a larger fitness value F(E_i) has a proportionally higher probability of surviving in the current schedule. This mechanism performs in a similar way to roulette wheel selection in genetic algorithms.
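The single-threshold survival rule can be sketched as follows; the function name is illustrative, not from the paper's code.

```python
import random

def elimination_one(fitnesses, rng=None):
    """Return the indices of assignments that survive Elimination-I.

    A single threshold r_s is drawn per iteration; any assignment whose
    fitness F(E_i) <= r_s is eliminated.
    """
    rng = rng or random.Random()
    r_s = rng.random()
    return [i for i, f in enumerate(fitnesses) if f > r_s]
```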

Elimination-II
Following the Elimination-I step, the shift pattern of each remaining nurse still has a chance of being eliminated from the partial schedule at a given rate r_m. The days and nights that an eliminated shift pattern covers are then released for the next Reconstruction step. As is usual for mutation operators, the rate here should be relatively small compared with the elimination rate in the Elimination-I step, in order to facilitate convergence; otherwise, there would be no bias in the sampling, leading to a random-restart type algorithm. From a series of experiments we found that r_m ≤ 5.0% yields good results, and hence r_m = 5% is the value adopted in our experiments. This process is analogous to the mutation operator in a genetic algorithm. Note that our method uses its Elimination-II step to eliminate some fitter components and thus indirectly generate a new, diversified solution.
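The small random cull can be sketched in the same style; again the name is illustrative only.

```python
import random

def elimination_two(surviving_indices, r_m=0.05, rng=None):
    """Eliminate each surviving assignment independently with probability r_m.

    Analogous to the mutation operator in a genetic algorithm.
    """
    rng = rng or random.Random()
    return [i for i in surviving_indices if rng.random() >= r_m]
```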

Reconstruction
The Reconstruction step takes a partial schedule as the input, and produces a complete schedule as the output. Since the new schedule is based on iterative improvement from the previous schedule, all shift assignments in the partial schedule should remain unchanged. Therefore, the Reconstruction task is reduced to assigning shift patterns to all unscheduled nurses to complete a partial solution.
Based on the domain knowledge of nurse rostering, there are many rules that can be used to build schedules. For example, Aickelin and Dowsland (2003) introduce three building rules: a 'Cover' rule, a 'Contribution' rule and a 'Combined' rule. Since the last two rules are quite similar, in this paper we only apply the 'Cover' rule and the 'Combined' rule to fulfil the Reconstruction task.
The 'Cover' rule is designed to achieve the feasibility of the schedule by assigning each unscheduled nurse the shift pattern that covers the largest number of uncovered shifts. For instance, assume that a shift pattern covers the Monday to Friday night shifts. Further assume that the current requirements for the night shifts from Monday to Sunday are as follows: (−4, 0, +1, −3, −1, −2, 0), where a negative value means undercover and a positive value over-cover. The given shift pattern then has a cover value of 3, as it covers the night shifts of Monday, Thursday and Friday. Note that for nurses of grade s, this rule only counts the shifts requiring grade s nurses as long as there is a single uncovered shift for this grade. If all shifts of grade s are covered, shifts of grade (s-1) are counted. This operation is necessary, as otherwise higher graded nurses might fill lower graded demand first, leaving the higher graded demand unmet.
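The worked example above can be reproduced with a few lines of Python; the helper name is illustrative and not from the paper's code.

```python
def cover_value(pattern, requirements):
    """Number of currently uncovered shifts that the pattern would cover.

    pattern      - 0/1 list over the shifts of interest;
    requirements - signed cover levels, where negative means undercover.
    """
    return sum(1 for covered, req in zip(pattern, requirements)
               if covered == 1 and req < 0)

# Monday-Friday night pattern against requirements (-4, 0, +1, -3, -1, -2, 0):
assert cover_value([1, 1, 1, 1, 1, 0, 0], [-4, 0, 1, -3, -1, -2, 0]) == 3
```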
The 'Combined' rule is designed to achieve a balance between solution quality and feasibility by going through the entire set of feasible shift patterns for a nurse and assigning each one a score. The one with the highest (i.e. best) score is chosen. If there is more than one shift pattern with the best score, the first such shift pattern is chosen. The score of a shift pattern is calculated as the weighted sum of the nurse's preference cost p_ij for that particular shift pattern and its contribution to the cover of all three grades. The latter is measured as a weighted sum of the grade one, two and three uncovered shifts that would be covered if the nurse worked this shift pattern, i.e. the reduction in shortfall. More precisely, and using the same notation as before, the score S_ij of shift pattern j for nurse i is calculated as

S_ij = w_p (100 − p_ij) + sum_{s=1}^{3} w_s q_is (sum_{k=1}^{14} a_jk e_ks),

where w_p is the weight of the nurse's preference cost p_ij for the shift pattern and w_s is the weight of covering an uncovered shift of grade s. q_is is 1 if nurse i is of grade s or higher, 0 otherwise. a_jk is 1 if shift pattern j covers day k, 0 otherwise. e_ks is the number of nurses needed to at least satisfy the demand R_ks if there are still nurses in shortage during period k at grade s, and 0 otherwise.
(100−p ij ) must be used in the score, as higher p ij values are worse and the maximum for p ij is 100.
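The score computation can be sketched directly from the formula; the default weights mirror those used later in the experiments (w_p = 1, w_1 = 8, w_2 = 2, w_3 = 1), and the function name is illustrative.

```python
def score(p_ij, q_i, a_j, e, w_p=1, w=(8, 2, 1)):
    """S_ij = w_p*(100 - p_ij) + sum_s w_s * q_is * (sum_k a_jk * e_ks).

    q_i[s]  - 1 if the nurse is of grade s+1 or higher, 0 otherwise;
    a_j[k]  - 1 if pattern j covers period k (14 periods), 0 otherwise;
    e[k][s] - remaining shortfall at period k for grade s+1, 0 if none.
    """
    total = w_p * (100 - p_ij)
    for s in range(3):
        total += w[s] * q_i[s] * sum(a_j[k] * e[k][s] for k in range(14))
    return total
```

For instance, a grade-one nurse with preference cost 20 whose pattern covers one period with a grade-one shortfall of 1 scores 1*(100 − 20) + 8*1 = 88.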
Using the above two rules with probabilities p_1 and p_2 respectively, the Reconstruction step assigns shift patterns to all unscheduled nurses until the partial solution is complete. In addition, to avoid stagnation at local optima, randomness needs to be introduced into the Reconstruction step. This is achieved by giving each unscheduled nurse an additional small probability p_3 of being scheduled with a randomly selected shift pattern. Note that the sum of p_1, p_2 and p_3 should be 1. Also note that, because we solve the problem without relying on any prior knowledge about which nurses should be scheduled earlier and which later, the indexing order of nurses given in the original data set is applied throughout the Reconstruction step.
After a partial solution is repaired, the fitness of this complete solution has to be calculated.
Unfortunately, due to the highly constrained nature of the problem, feasibility cannot be guaranteed. Hence, the following penalty function approach is used to evaluate the solutions obtained:

min sum_{i=1}^{n} sum_{j=1}^{m} p_ij x_ij + w_demand sum_{k=1}^{14} sum_{s=1}^{g} max( R_ks − sum_{i=1}^{n} sum_{j=1}^{m} q_is a_jk x_ij , 0 ),

where the constant w_demand is the penalty per uncovered shift in the solution, and the "max" function is used so that only undercovering is penalized.
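The penalised objective can be sketched with the notation of the integer programming formulation; the helper name is illustrative, not from the paper's code.

```python
def objective(x, p, q, a, R, w_demand=200, g=3):
    """Penalised objective: total preference cost plus w_demand per unit of
    uncovered demand. Only undercover is penalised, via max(..., 0).

    x[i] is the pattern chosen for nurse i; p, q, a, R are as in the
    formulation (preference costs, grade indicators, pattern coverage,
    demands)."""
    n = len(x)
    cost = sum(p[i][x[i]] for i in range(n))
    for k in range(14):
        for s in range(g):
            cover = sum(q[i][s] * a[x[i]][k] for i in range(n))
            cost += w_demand * max(R[k][s] - cover, 0)
    return cost
```

With one nurse, one grade and a single demanded period that the chosen pattern covers, the objective equals the preference cost alone; each unit of shortfall adds w_demand.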

Computational Results
This section describes the computational experiments used to test our proposed algorithm. For all experiments, 52 real data sets (as provided by the hospital) are available. Each data set consists of one week's requirements (i.e. 14 time periods) for all shift and grade combinations and a list of nurses available together with their preference costs p ij and qualifications. Typically, there will be between 20 and 30 nurses per ward, 3 grade-bands and 411 different shift patterns.
They are moderately sized problems compared to other problems reported in the literature. The data was collected from three wards over a period of several months and covers a range of scheduling situations, e.g. some data instances have very few feasible solutions whilst others have multiple optima. A zip file containing all 52 instances is available for download at http://www.cs.nott.ac.uk/~jpl/Nurse_Data/NurseData.zip. The results of the following approaches are compared:
- IP: optimal or best-known solutions found by XPRESS MP, a commercial integer programming solver (Dowsland and Thompson 2000);

Algorithm Details
- GA-1: basic genetic algorithm reported in (Aickelin and White 2004);
- GA-2: adaptive genetic algorithm, which is the same as GA-1 but also tries to self-learn good parameters during the run, starting from the values given below (Aickelin and White 2004);
- GA-3: multi-population genetic algorithm, which is the same as GA-2 but also features multiple populations (Aickelin and White 2004);
- GA-4: hill-climbing genetic algorithm, which is the same as GA-3 but also includes an elaborate local search;
- GA-5: indirect genetic algorithm, which uses a greedy heuristic to build schedules from an evolved ordering of the nurses;
- EDA: estimation of distribution algorithm (Aickelin and Li 2007);
- LCS: learning classifier system (Li and Aickelin 2004).

For all data instances, we used the following set of fixed parameters in our experiments:
- Stopping criterion: a maximum of 50,000 iterations, or an optimal/best-known solution has been found;
- Rate of Elimination-II in section 3.4: r_m = 0.05;
- Rates of Reconstruction in section 3.5: p_1 = 0.80, p_2 = 0.18, p_3 = 0.02;
- Weight set in formula (9): w_p = 1, w_1 = 8, w_2 = 2 and w_3 = 1;
- Penalty weight in fitness function (10): w_demand = 200.
Note that some parameter values (i.e. the maximum number of iterations, r m , p 1 , p 2 and p 3 ) are based on our experience and intuition and thus we cannot prove they are the best for each instance. The rest of the values (i.e. w p , w 1 , w 2 , w 3 and w demand ) are the same as those used in previous papers for solving the same 52 instances, and we are continuing to use them for consistency.
Our method was coded in Java 2, and all experiments were undertaken on a Pentium 4 2.1GHz machine under Windows XP. To test the robustness of the proposed algorithm, each data instance was run twenty times, fixing the above parameters and varying the pseudo random number seed at the beginning. The execution time per run and per data instance varies from several milliseconds to 20 seconds depending on the difficulty of the individual data instance. Table 2 lists the average runtimes of various approaches over the same 52 instances: the first six (i.e. IP, GA-1, GA-2, GA-3, GA-4 and GA-5) were run on a different Pentium III PC, while the following two (i.e. EDA and LCS) were run on a similar Pentium 4 2.0GHz PC. Obviously, the IP is much slower than any of the above meta-heuristics. Among these meta-heuristic methods, our algorithm takes no more time than the others, although an accurate comparison in terms of runtime is difficult due to the different environments (i.e. machines, compilers and programming languages) in use. For example, the genetic algorithms are coded in C and the EDA is coded in C++. A comparison in terms of the number of evaluations is also difficult, because the other algorithms evaluate each candidate solution as a whole, while our algorithm evaluates partial solutions as well. Table 2 also reports the performance of CHEE under maximum iteration limits of 10,000, 20,000, 30,000, 50,000 and 100,000. Its last five columns contain the results of evaluating individual parts of CHEE, namely "Elimination-I (r_s=0.5) + Elimination-II + Con-heu", "Elimination-I + Con-heu", "Elimination-II + Con-heu", "Elimination I + Elimination II + Con-with-Cover-only" and "Elimination I + Elimination II + Con-with-Combined-only".

Analysis of Results
The results of all the approaches in Table 1 are obtained using the same 52 benchmark test instances, with the bold figures representing the optimal solutions found by a commercial software package. Compared with the results of the mathematical programming approach, which can take up to 24 hours of runtime (shown in the 'IP' column), our results (shown in the 'Best' column) are only 2.7% more expensive on average, but they are all achieved within 20 seconds. Compared with the best results of the various meta-heuristic approaches, the CHEE results are in general slightly better than those of the best-performing indirect genetic algorithm (with a relative percentage deviation of 4%) and much better than the others (with deviation values from 10% to 278%). A student's t-test (where "255" is used instead of "N/A") also supports these observations: considering the best values for the 52 instances, CHEE performs better than GA-1, GA-2, GA-3, EDA and LCS within a confidence interval of 95%.
Since our proposed methodology uses a 'Cover' rule and a 'Combined' rule in its Reconstruction step for schedule repair, it is interesting to ask whether the good performance of our algorithm is mainly due to these two building rules. To clarify this, we performed an additional set of experiments in which the two elimination steps were skipped, i.e. only the Reconstruction step was used to build a schedule from an empty solution. This method does not yield a single feasible solution for 24 instances, as the 'Con-heu' column shows. This underlines the difficulty of the problem and, most importantly, the key roles played by the two elimination steps in our full methodology, as the Reconstruction step alone is not capable of solving the problem. Table 2 shows the effect of the maximum number of iterations, and the effect of each component method with different parameter settings, on the proposed CHEE. Clearly, the larger the maximum number of iterations, the better the solution quality we can obtain. Regarding the five methods within CHEE, "Elimination-I + Con-heu" performs best (with an average value of 52.0), "Elimination-I (r_s=0.5) + Elimination-II + Con-heu" second, "Elimination-II + Con-heu" third, "Elimination I + Elimination II + Con-with-Cover-only" fourth and "Elimination I + Elimination II + Con-with-Combined-only" worst. However, even the best performing "Elimination-I + Con-heu" method is much worse than the standard full CHEE method introduced in section 3 (with an average value of 29.9). These results reveal that all parts of CHEE are important for delivering high quality solutions and that none of them can be freely removed.
The results of Table 1 are also presented in graphical format, providing an overall performance comparison between our proposed methodology and the other approaches. The bars below the y-axis represent the number of times the algorithm failed to find a feasible solution in the 20 runs (i.e. the value of 'Inf' in Table 1). Hence, the smaller the area below the y-axis and the larger the area above, the better the algorithm's performance. Note that 'missing' bars mean that, over 20 runs, feasible solutions are obtained at least once, but none of them are optimal or of good quality (within 3 units of the optimal values). The best results for these instances are obtained by the IP software, and in general, our approach performs better than the previous best-performing meta-heuristic approach. The basic genetic algorithm (i.e. GA-1), the adaptive genetic algorithm (i.e. GA-2), the multi-population genetic algorithm (i.e. GA-3) and even the hill-climbing genetic algorithm (i.e. GA-4), which includes multiple populations and an elaborate local search, are all significantly outperformed in terms of feasibility, best and average results.
The other three approaches (i.e. GA-5, the EDA and the LCS) belong to the class of indirect approaches, in which a set of heuristic rules, including the 'Cover' rule and the 'Combined' rule used in our approach, is used for schedule building. Compared with the EDA and the LCS, our new approach performs much better in terms of the best and average results, and slightly worse in terms of feasibility. Compared with GA-5, which performs best among all the heuristic algorithms, our approach performs better in all aspects: feasibility (99% vs. 95%), best results (21.7 vs. 22.0) and average results (28.6 vs. 35.6). In addition, it is worth mentioning that GA-5 uses the best possible order of the nurses (which, of course, has to be found) for the greedy heuristic to build a schedule, while our algorithm only uses the fixed indexing order given in the original data sets.

Conclusions
This paper presents a new approach to address the hospital personnel scheduling problem. The major idea behind this method is to decompose a solution into components, and then to implement two evolutionary-based elimination strategies on these components to make iterative improvements in each single schedule. In each iteration, an unfit portion of the solution is removed. Any partial solution is repaired by a refined greedy building process.
Taken as a whole, the proposed approach has a number of distinct advantages. Firstly, it is simple and easy to implement because it uses greedy algorithms and local heuristics. Secondly, due to its features of maintaining only a single solution at each iteration and eliminating inferior parts from this solution, it can quickly converge to local optima. Thirdly, the technique has the ability to jump out of local optima in an effective manner. Finally, this approach can easily be combined with other meta-heuristics to achieve peak performance on solution quality if CPU time is not the major concern. For example, tabu search can be used in the Reconstruction step to explore neighbouring solutions aggressively and avoid cycles by declaring attributes of visited solutions tabu. In addition, simulated annealing could be used as the acceptance criterion for the solutions resulting from Reconstruction, accepting not only improved solutions as in the current form, but also worse ones with a certain probability.