A Multiobjective Computation Offloading Algorithm for Mobile-Edge Computing

In mobile-edge computing (MEC), smart mobile devices (SMDs) with limited computation resources and battery lifetime can offload their computing-intensive tasks to MEC servers, thereby enhancing their computing capability and reducing their energy consumption. Nevertheless, offloading tasks to the edge incurs additional transmission time and thus higher execution delay. This article studies the tradeoff between the completion time of applications and the energy consumption of SMDs in MEC networks. The problem is formulated as a multiobjective computation offloading problem (MCOP), where task precedence, i.e., the ordering of tasks in SMD applications, is introduced as a new constraint. An improved multiobjective evolutionary algorithm based on decomposition (MOEA/D) with two performance-enhancing schemes is proposed: 1) a problem-specific population initialization scheme that uses a latency-based execution location (EL) initialization method to initialize the EL (i.e., either the local SMD or the MEC server) of each task and 2) a dynamic voltage and frequency scaling-based energy conservation scheme that decreases the energy consumption without increasing the completion time of applications. The simulation results clearly demonstrate that the proposed algorithm outperforms a number of state-of-the-art heuristics and metaheuristics in terms of the convergence and diversity of the obtained nondominated solutions.

face recognition, augmented reality, natural language processing, etc. [1], [2]. Nowadays, although SMDs are becoming increasingly powerful in terms of computing capability, they still cannot adequately support computing-intensive applications. On the one hand, SMDs with limited computing capability may cause high latency, failing to meet the required Quality-of-Service (QoS) demands. On the other hand, the high battery consumption of computing-intensive applications may also significantly degrade the Quality of Experience (QoE) for end users.
With more computing and storage resources in cloud servers, mobile cloud computing (MCC) [3] has been envisioned as a potential solution to the above-mentioned problems. MCC migrates computational tasks to cloud servers, thereby reducing the computational burden and energy consumption of local SMDs. This is referred to as the computation offloading problem (COP). Nevertheless, cloud servers are usually geographically far away from SMDs, resulting in high transmission delay and slow response. Obviously, MCC is not suitable for scenarios involving delay-sensitive applications, as QoE cannot be properly guaranteed. In fact, computation offloading in MCC is only suitable for delay-tolerant and computation-intensive applications, such as online social networks, mobile e-commerce, remote learning, etc. On the other hand, mobile-edge computing (MEC) relocates cloud computing resources to the edge of networks, in close proximity to SMDs, ensuring lower end-to-end delay and faster response [4]-[6]. Computation offloading in MEC is more appropriate for supporting delay-sensitive and computation-intensive applications, such as virtual reality, autonomous driving, interactive online games, and so on. With the demand for delay-sensitive applications ever increasing, it is hence more practical to study the COP in MEC.
In general, MEC servers are lightweight regarding the computing capability, because their economic and scalable deployment should be considered. It is thus not feasible to offload all computational tasks from SMDs to the MEC severs. More data transmission over communication channels also leads to a higher transmission delay. To avoid overloading, SMDs should offload the appropriate amount of computational tasks to MEC servers, which also helps to reduce the battery consumption of SMDs.
In MEC, the completion time of applications and the energy consumption of SMDs conflict with each other; improving one of them deteriorates the other. The computational problem of reasonably offloading tasks between SMDs and MEC servers, i.e., the COP, has become one of the most challenging research topics in the area of MEC.
In this article, we model the computation offloading in MEC as a multiobjective COP (MCOP). A multiobjective evolutionary algorithm based on decomposition (MOEA/D) is adopted for solving it [7].
The main contributions are summarized as follows.

1) An MCOP in the MEC environment is modeled, where the average completion time (ACT) of applications and the average energy consumption (AEC) of SMDs are defined as the two objectives. On each SMD, only one application, with an ordered list of tasks, runs at a time. To our knowledge, this is the first MCOP model in MEC that considers the task-precedence constraints within each application.

2) An improved MOEA/D algorithm with two performance-enhancing schemes, namely, MOEA/D-MCOP, is proposed. The first scheme, a problem-specific population initialization (PSPI) scheme, generates a set of high-quality solutions to the MCOP, where a latency-based execution location initialization (LELI) method is designed to determine the initial execution location (EL) (i.e., local SMD or MEC server) of each task, guiding the exploration toward promising regions of the search space. The second scheme, a dynamic voltage and frequency scaling-based energy conservation scheme, aims at reducing the energy consumption of SMDs.

3) For the new MCOP built in this article, no benchmark instance exists in the literature. A set of test instances is thus generated to verify the performance of the proposed MOEA/D-MCOP and is made available to the research community for further investigation of this emerging topic. The simulation results clearly demonstrate that the proposed algorithm obtains high-quality nondominated solutions and outperforms a number of state-of-the-art MOEAs and heuristic algorithms on several evaluation criteria.

The remainder of this article is organized as follows. The related work is introduced in Section II. In Section III, we present the MEC system model and formulate the biobjective MCOP. Section IV briefly reviews the multiobjective optimization problem (MOP) and the original MOEA/D. The proposed MOEA/D-MCOP is explained in Section V. The simulations and performance analysis are discussed in Section VI. Section VII presents the conclusion and future work.

II. RELATED WORK
The COP has received increasing research attention from both academia and industry [8]. In general, completion time and energy consumption are considered typical criteria for COP performance evaluation, leading to three objective settings: minimizing the completion time, minimizing the energy consumption, or minimizing both at the same time.
When an SMD offloads computing-intensive tasks to an MEC server, the completion time is one of the most important criteria for QoE evaluation. Liu et al. [9] adopted the Markov decision process to determine ELs for tasks. A transmission policy was devised based on the queueing state of the task buffer, the transmission unit state, and the local processing unit state. The ACT of tasks was minimized by an efficient one-dimensional search algorithm. Mao et al. [10] developed a green MEC system with energy harvesting and proposed a low-complexity online algorithm, i.e., the Lyapunov optimization-based dynamic computation offloading algorithm (LODCO), to reduce the execution latency by jointly determining the offloading decision, the CPU-cycle frequency, and the transmission power. Yang et al. [11] investigated the scheduling problem of multiuser computing partitioning and cloud resource computing offloading. The ACT of multiple users, rather than a single user, was minimized by an offline heuristic algorithm. Dinh et al. [12] took both fixed and elastic CPU frequencies of SMDs into account. A semidefinite relaxation (SDR)-based approach was proposed to minimize the execution time of all tasks.
Energy consumption of SMDs is also a main concern in COP. Muñoz et al. [13] proposed a framework to jointly optimize the usage of computational and radio resources, where multiple antennas were used in SMDs and femto access points. The energy consumption was minimized by optimizing the communication time and the amount of data offloaded to a femto access point. Tong and Gao [14] aimed at obtaining a tradeoff between the energy consumption of SMDs and the QoS of applications. An application-aware wireless transmission scheduling algorithm was presented to minimize the energy consumption, subject to the application deadline. Masoudi et al. [15] considered three practical constraints, i.e., the backhaul capacity, the maximum tolerable delay, and the interference level. They proposed a joint power allocation and decision-making algorithm to minimize the power consumption of SMDs. Wang et al. [16] presented an integrated framework for computation offloading and interference management, where the physical resource block, the computation offloading decision, and the computation resource allocation were taken into consideration for reducing energy consumption. Mahmoodi et al. [17] modeled the COP as a linear optimization problem on energy consumption, where the communication delay, the overall application execution time, and the component precedence ordering were taken into account. Xu et al. [18] proposed an energy-aware computation offloading scheme, where simple additive weighting and multiple criteria decision making were used to determine an optimal solution. In [19], an energy-efficient COP in 5G MEC was investigated, considering fronthaul and backhaul links. The overall energy consumption was minimized by an artificial fish swarm algorithm, subject to the completion time demand. In [20], a security- and energy-efficient computation offloading scheme based on a genetic algorithm was presented. Guo and Liu [21] formulated a cloud-MEC collaborative COP.
The authors presented an approximate collaborative computation offloading scheme to minimize the energy consumption of all mobile devices. Zhang and Wen [22] proposed a collaborative task execution scheduling algorithm to solve the delay-constrained workflow scheduling problem in MCC. The energy consumption of SMDs was minimized, with the application delay deadline satisfied. Guo et al. [23] studied an energy-efficient computation offloading management scheme in MEC with small cell networks. A hierarchical GA- and PSO-based computation algorithm was developed to minimize the energy consumption of all mobile devices. Kuang et al. [24] formulated a multiuser offloading game problem in the OFDMA communication system. The authors presented an offloading game mechanism to maximize the number of energy-saving devices, including a beneficial offloading threshold algorithm and a beneficial offloading group algorithm. The mechanism minimized the energy cost while considering the application's deadline and risk probability. Lin et al. [25] applied dynamic voltage and frequency scaling (DVFS) to minimize SMD energy consumption in the MCC environment, where task-precedence requirements within any application were satisfied. However, the authors assumed that there was a single SMD in their MCC system, which is impractical: in real-world applications, multiple SMDs are active at the same time, and some of them may offload their computation tasks to the cloud.
On the one hand, a smaller completion time requires more tasks to be executed on local SMDs, which leads to higher battery consumption. On the other hand, keeping the battery consumption low requires more computation to be offloaded to the edge. Some researchers hence treated the completion time of applications and the energy consumption of SMDs as equally important, i.e., minimizing them simultaneously. Zhang et al. [26] considered single- and multicell MEC network scenarios, and proposed an integrated framework for computation offloading and resource allocation. An iterative search algorithm was developed to strike a balance between execution time and energy consumption. Peng et al. [27] developed an optimal task scheduling scheme for SMDs using the DVFS technology and the whale optimization algorithm. Considering the operating CPU-cycle frequency as well as the task execution position and sequence, this scheme could optimize both objectives simultaneously. Guo et al. [28] studied an energy-efficient COP subject to the application execution latency. An energy-efficient dynamic offloading and resource scheduling (eDors) scheme was proposed to reduce the execution latency and the energy consumption. Wang et al. [29] modeled an energy-efficient M/M/n-based COP with both objectives. A distributed algorithm considering transmission power allocation, strategy selection, and clock frequency control was proposed. Cui et al. [30] investigated the tradeoff between the completion time and the energy consumption subject to end-user requirements, and presented an improved fast and elitist nondominated sorting genetic algorithm (NSGA-II).
In summary, considering the completion time and energy consumption as two objectives represents one of the main streams in current research on MEC computation offloading. To the best of our knowledge, however, task precedence has not been considered in existing MCOPs, even though it is a practical constraint in many applications. For example, in any face recognition system, object detection cannot be launched before the completion of video/image collection. This motivates us to model a new MCOP with a realistic task-precedence constraint. Most of the existing algorithms for the MCOP evaluate solutions using a weighted sum of the objectives, which ignores the fact that the objectives conflict with each other; a single solution cannot be optimal for all objectives. NSGA-II has thus been employed to solve the MCOP [30]. As a multiobjective evolutionary algorithm (MOEA), NSGA-II has shown a promising advantage, i.e., providing a set of nondominated solutions for decision making in a single run. Nevertheless, as we observe in this article, NSGA-II not only is likely to become trapped in local optima but also converges slowly. MOEA/D decomposes an MOP into a number of scalar optimization subproblems (SOSPs) and solves them simultaneously. It has been reported that MOEA/D achieves better optimization performance with lower computational overhead than NSGA-II [31]-[33]. This motivates us to investigate MOEA/D for the newly modeled MCOP in this article.

A. System Overview
An MEC system consists of one macro eNodeB (MeNB) and a set of small eNodeBs (SeNBs) [26], as shown in Fig. 1. The MeNB is equipped with an MEC server capable of executing multiple computing-intensive tasks in parallel. The MEC server can dynamically allocate its computing resources to execute tasks offloaded from different SMDs. All SeNBs are connected to the MeNB via wired lines. Each SeNB forms a small cell, connecting to a set of SMDs via wireless channels.
Each task in an application can run either on the local SMD or on the MEC server. Computation offloading occurs when tasks are offloaded from SMDs to the MEC server, with data delivery relying on relaying via the SeNB and MeNB.
There are two commonly used structural representations of an application, namely, the graph-based method [34] and the language-based method [35]. In particular, the graph-based method includes the directed acyclic graph (DAG) [20], [25], [27], [28], [36] and the Petri net [37]. The DAG-based model is one of the most popular; therefore, each application on an SMD is modeled as a DAG task structure. Fig. 2 shows an example DAG of an application.
In this article, time sharing is adopted in the MEC system and the minimum time unit is referred to as time interval (e.g., several seconds). We assume that in any time interval, for any SMD, there is only one application being executed. However, any SMD is allowed to run different applications in different time intervals, enabling the co-existence of multiple applications.

B. System Model
For the MeNB, its associated MEC server has a computing capability of F. Denote the ith SeNB as π i , i = 1, . . . , S, where S is the number of SeNBs. Let N SMD i and U i,j be the number of SMDs and the jth SMD in the ith small cell associated with π i , respectively.
Denote the application to be executed on SMD U i,j by a DAG G i,j = (V i,j , E i,j ), where V i,j and E i,j are the task and precedence constraint sets, respectively, i = 1, . . . , S, and j = 1, . . . , N SMD i . An application is also referred to as a task graph G i,j , which is composed of N i,j tasks, where N i,j = |V i,j |.
Let v i,j,k ∈ V i,j be the kth task in task set V i,j , k = 1, . . . , N i,j . Edge e(v i,j,k , v i,j,l ) ∈ E i,j defines the task-precedence constraint from task v i,j,k to task v i,j,l , meaning v i,j,l cannot be executed until v i,j,k is completed.
Let pre(v i,j,k ) and suc(v i,j,k ) be the sets of the immediate predecessors and successors of task v i,j,k , respectively. For a task graph G i,j , denote the start and end tasks by v i,j,start and v i,j,end , respectively. Taking the task graph in Fig. 2 as an example, for task v 6 , its associated sets of immediate predecessors and successors are pre(v 6 ) = {v 2 , v 3 , v 4 } and suc(v 6 ) = {v 7 }; v 1 and v 7 are the start and end tasks, respectively.
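The predecessor and successor sets above can be derived mechanically from a DAG edge list. The following is a minimal sketch; the edge list is an assumption chosen so that the example matches the stated pre(v 6 ) and suc(v 6 ) of Fig. 2, not the article's actual graph.

```python
# Derive immediate-predecessor and -successor sets from a DAG edge list.
from collections import defaultdict

def pre_suc(edges):
    pre, suc = defaultdict(set), defaultdict(set)
    for u, v in edges:  # edge u -> v: v cannot start until u completes
        pre[v].add(u)
        suc[u].add(v)
    return pre, suc

# Hypothetical edge list consistent with the example in the text.
edges = [(1, 2), (1, 3), (1, 4), (1, 5), (2, 6), (3, 6), (4, 6), (5, 7), (6, 7)]
pre, suc = pre_suc(edges)
# pre[6] == {2, 3, 4} and suc[6] == {7}, matching pre(v6) and suc(v6);
# task 1 has no predecessors (start task) and task 7 no successors (end task).
```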
Each task v i,j,k is modeled as a 3-tuple set v i,j,k = (c i,j,k , d i,j,k , o i,j,k ), where c i,j,k is the number of CPU cycles required to perform v i,j,k , and d i,j,k and o i,j,k are the input and output data sizes of v i,j,k , respectively. The input data of v i,j,k include the input parameters, the program code, and the output data generated by all its immediate predecessors in pre(v i,j,k ). The main notations used in this article are summarized in Table I.

C. Communication Model
When a task is selected for offloading, its associated input data are transmitted to the MEC server via SeNB and MeNB. The transmission delay from SeNB to MEC server via wired connections is usually trivial, thus is ignored. The transmission delay between the corresponding SMD and SeNB is considered in the model. Let B total and N channel be the total bandwidth and the number of channels offered by the MEC system, respectively. Each channel is of a bandwidth ω = B total /N channel . Each SMD uses one of the N channel channels for data offloading. To guarantee that all SMDs within the same small cell can perform independent computation offloading, it is assumed that N channel is no less than the maximum number of SMDs allowed in a small cell. If two SMDs from neighboring small cells use the same channel to transmit data, interference occurs and the transmission rate is reduced.
According to the Shannon-Hartley theorem, error-free transmission over a band-limited channel with Gaussian white noise is possible at rates up to the theoretical maximum transmission rate (the channel capacity). In this article, we set the achievable uplink transmission rate of a channel to its theoretical maximum transmission rate. Assume each wireless channel is symmetric, i.e., the achievable uplink and downlink transmission rates are equal. When SMD U_{i,j} offloads tasks to the MEC server, the achievable uplink transmission rate R_{i,j} is calculated as

R_{i,j} = \omega \log_2 \left( 1 + \frac{p^{txd}_{i,j} g_{i,j}}{\sigma^2 + I_{i,j}} \right)

where \omega is the channel bandwidth, p^{txd}_{i,j} is the power consumption of U_{i,j} when tasks are offloaded to the MEC server, g_{i,j} is the channel gain between SMD U_{i,j} and SeNB \pi_i, and \sigma^2 is the noise power. I_{i,j} is the interference parameter associated with SMD U_{i,j}, indicating how severe the channel sharing is:

I_{i,j} = \sum_{l \neq i} \sum_{k} \lambda_{(i,j),(l,k)} \, p^{txd}_{l,k} \, g_{i,(l,k)}

where \lambda_{(i,j),(l,k)} \in \{0, 1\} is a channel-sharing coefficient: \lambda_{(i,j),(l,k)} = 1 indicates that the same channel is shared by U_{i,j} and U_{l,k}, and \lambda_{(i,j),(l,k)} = 0 otherwise. p^{txd}_{l,k} is the power consumption of U_{l,k} when offloading tasks, and g_{i,(l,k)} is the channel gain between U_{l,k} and \pi_i, where U_{l,k} is associated with SeNB \pi_l, l, i \in \{1, . . . , S\} and l \neq i.
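The uplink-rate computation described above can be sketched as follows. All numeric values (bandwidth, powers, gains, noise) are illustrative assumptions, not parameters from the article.

```python
import math

def interference(shared):
    """Sum the received interference from co-channel SMDs in neighboring
    cells, given (transmit power, channel gain) pairs."""
    return sum(p * g for p, g in shared)

def uplink_rate(w, p_txd, g, sigma2, I):
    """Shannon rate: R = w * log2(1 + p*g / (sigma2 + I))."""
    return w * math.log2(1.0 + (p_txd * g) / (sigma2 + I))

# Illustrative numbers: 1 MHz channel, 100 mW transmit power.
I = interference([(0.1, 1e-7)])                              # one co-channel SMD
R = uplink_rate(w=1e6, p_txd=0.1, g=1e-6, sigma2=1e-9, I=I)  # bits/s
```

With no co-channel SMDs the interference term vanishes and the achievable rate is strictly higher, which is the effect channel sharing has in the model.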

D. Local Computing
In this article, the local computing model is based on the local scheduling model in MCC [25], where the DVFS technique is enabled. For each SMD in the MEC system, assume there are H heterogeneous cores in its processor, enabling the processor to execute tasks in parallel if there are no task-precedence constraints among them. All cores are DVFS enabled, allowing each core to run at different frequency levels at different times. An arbitrary SMD U_{i,j} can be defined as a 4-tuple (\{f^{max}_{i,j,h}\}, \{p^{max}_{i,j,h}\}, p^{txd}_{i,j}, p^{rxd}_{i,j}), where f^{max}_{i,j,h} is the maximum computing frequency of the hth core of U_{i,j}, h = 1, . . . , H, p^{max}_{i,j,h} is the maximum power consumption when the hth core works at frequency f^{max}_{i,j,h}, p^{txd}_{i,j} is the power consumption of U_{i,j} when offloading tasks, and p^{rxd}_{i,j} is the power consumption of U_{i,j} when receiving data from its associated SeNB.
Assume there are M frequency scaling factors, i.e., \alpha_1, . . . , \alpha_M, for an arbitrary core in an arbitrary SMD, where 0 < \alpha_1 < · · · < \alpha_t < · · · < \alpha_M = 1 [25]. The actual computing frequency at which the hth core of SMD U_{i,j} works can be defined as

f^{actual}_{i,j,h} = \alpha_t f^{max}_{i,j,h}.

The actual power consumption of the hth core of U_{i,j}, p^{actual}_{i,j,h}, is equal to

p^{actual}_{i,j,h} = a_{i,j,h} \left( f^{actual}_{i,j,h} \right)^{\gamma}

where \gamma is a constant in the range [2, 3] and a_{i,j,h} is a coefficient associated with the chip structure. Let T^{min}_{SMD,h}(v_{i,j,k}) = c_{i,j,k} / f^{max}_{i,j,h} denote the execution time of v_{i,j,k} on the hth core at the maximum computing frequency, which depends on the number of CPU cycles required to perform v_{i,j,k}, c_{i,j,k}, and the maximum computing frequency, f^{max}_{i,j,h}. Then, the actual execution time of v_{i,j,k} on the hth core, T^{exe}_{SMD,h}(v_{i,j,k}), is

T^{exe}_{SMD,h}(v_{i,j,k}) = \frac{c_{i,j,k}}{\alpha_t f^{max}_{i,j,h}} = \frac{T^{min}_{SMD,h}(v_{i,j,k})}{\alpha_t}.

If task v_{i,j,k} is executed locally on the hth core of SMD U_{i,j}, its actual energy consumption E^{actual}_{SMD,h}(v_{i,j,k}) is obtained by (4), according to [25]:

E^{actual}_{SMD,h}(v_{i,j,k}) = \alpha_t^{\gamma - 1} E^{max}_{SMD,h}(v_{i,j,k})     (4)

where E^{max}_{SMD,h}(v_{i,j,k}) is the maximum energy consumption if task v_{i,j,k} is executed on the hth core of SMD U_{i,j} at the maximum computing frequency.
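The DVFS relations discussed above can be sketched numerically. The forms assumed here follow the cited model: at scaling factor alpha, the frequency is alpha times the maximum, power grows as frequency to the power gamma, time is cycles over frequency, and energy is power times time. The chip coefficient `a` and all inputs are illustrative.

```python
def dvfs_time_energy(c, f_max, alpha, a=1e-27, gamma=3.0):
    """Execution time and energy of a task on one core under DVFS.

    c: CPU cycles required; f_max: max core frequency (Hz);
    alpha: frequency scaling factor in (0, 1]; a, gamma: chip constants
    (assumed values, gamma in [2, 3] per the model).
    """
    f = alpha * f_max        # actual operating frequency
    T = c / f                # execution time (s)
    E = a * (f ** gamma) * T # energy = power * time = a * c * f**(gamma-1)
    return T, E

T_full, E_full = dvfs_time_energy(c=1e9, f_max=1e9, alpha=1.0)
T_half, E_half = dvfs_time_energy(c=1e9, f_max=1e9, alpha=0.5)
# Halving the frequency doubles the time but (with gamma = 3) quarters the
# energy: E scales as alpha**(gamma - 1), matching the alpha^(gamma-1) factor.
```

This is exactly the tradeoff the energy conservation scheme exploits: slowing a core saves energy as long as the longer execution time does not delay the application.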
Before executing task v_{i,j,k}, all its immediate predecessors must be completed. If v_{i,j,k} is to be launched on a core of SMD U_{i,j}, its ready time depends on the completion times of its immediate predecessors in pre(v_{i,j,k}). That is, the ready time for executing v_{i,j,k}, RT^{exe}_{SMD}(v_{i,j,k}), is the maximum completion time over all tasks in pre(v_{i,j,k}). Assume v_{i,j,l} \in pre(v_{i,j,k}) is an immediate predecessor of v_{i,j,k}, which can be executed on either the local SMD or the MEC server. Let CT^{exe}_{SMD}(v_{i,j,l}) and CT^{rxd}_{SMD}(v_{i,j,l}) be the completion time for executing v_{i,j,l} locally and that for receiving its output data from the MEC server, respectively. Then

RT^{exe}_{SMD}(v_{i,j,k}) = \max_{v_{i,j,l} \in pre(v_{i,j,k})} \max \left\{ CT^{exe}_{SMD}(v_{i,j,l}), CT^{rxd}_{SMD}(v_{i,j,l}) \right\}.

If task v_{i,j,k} is selected to run on a core of U_{i,j}, its execution may not start at its ready time RT^{exe}_{SMD}(v_{i,j,k}), because that core may be busy executing other tasks at that time. The start time for executing v_{i,j,k} is denoted by ST^{exe}_{SMD}(v_{i,j,k}) [25].

E. Edge Computing
The edge computing model in this article is based on the cloud scheduling model in [25]. However, the authors of [25] assumed that there is only one active SMD in the MCC network, which is unrealistic. We assume there are multiple SMDs in the MEC system, which better reflects real-world demands.
To model offloading task v_{i,j,k} to the MEC server, let RT^{txd}_{SMD}(v_{i,j,k}) be the ready time for transmitting v_{i,j,k} from U_{i,j} via a wireless channel. If v_{i,j,k} is to be offloaded to the MEC server, its ready time for transmission, RT^{txd}_{SMD}(v_{i,j,k}), depends on the maximum completion time of all tasks in pre(v_{i,j,k}). The time duration required to transmit v_{i,j,k} to the MEC server is

T^{txd}_{SMD}(v_{i,j,k}) = \frac{d_{i,j,k}}{R_{i,j}}

where d_{i,j,k} and R_{i,j} are the input data size of v_{i,j,k} and the achievable uplink transmission rate, respectively. If task v_{i,j,k} is offloaded to the MEC server, the energy consumption of U_{i,j} for transmitting this task is

E^{txd}_{SMD}(v_{i,j,k}) = p^{txd}_{i,j} \, T^{txd}_{SMD}(v_{i,j,k}).

The execution time of v_{i,j,k} on the MEC server is

T^{exe}_{MEC}(v_{i,j,k}) = \frac{c_{i,j,k}}{F}

where c_{i,j,k} is the number of CPU cycles required to execute v_{i,j,k} and F is the computing capability of the MEC server. The MEC server can start to transmit the output data of v_{i,j,k} back to U_{i,j} immediately after the completion of v_{i,j,k}. Let RT^{txd}_{MEC}(v_{i,j,k}) be the ready time for the MEC server to transmit back the output data of v_{i,j,k}, i.e., the completion time of v_{i,j,k} on the MEC server. The time duration required to receive the output data of v_{i,j,k} from the MEC server is

T^{rxd}_{SMD}(v_{i,j,k}) = \frac{o_{i,j,k}}{R_{i,j}}

where o_{i,j,k} is the output data size after the execution of v_{i,j,k}. The cloud scheduling model in [25] does not consider the energy consumption of SMD U_{i,j} incurred when receiving the output data of task v_{i,j,k}, which is not practical. In contrast, our edge computing model takes this receiving energy into consideration, which helps to accurately estimate the energy consumption. The energy consumption of U_{i,j} for receiving the output data of v_{i,j,k}, E^{rxd}_{SMD}(v_{i,j,k}), is defined, according to [22], [38], as

E^{rxd}_{SMD}(v_{i,j,k}) = p^{rxd}_{i,j} \, T^{rxd}_{SMD}(v_{i,j,k}).

Taking the task graph in Fig. 2 as an example, we briefly explain the process of local and edge computing. In the MEC system, there is one SeNB, namely, \pi_1, and one SMD associated with \pi_1, namely, U_{1,1}. The application G_{1,1} has seven tasks, i.e., v_{1,1,k}, k = 1, . . . , 7, to be run on U_{1,1}. Suppose U_{1,1} has three cores (i.e., h = 1, 2, 3).
Table II shows the execution time of each task on different cores and the actual ELs of all tasks. Let loc_{1,1,k} \in \{1, 2, 3, 4\} be the EL of v_{1,1,k}, k = 1, . . . , 7. If 1 \leq loc_{1,1,k} \leq 3, v_{1,1,k} is executed on the loc_{1,1,k}th core of U_{1,1}. If loc_{1,1,k} = 4, v_{1,1,k} is offloaded to the MEC server. For simplicity, we set the time duration required to transmit each task to the MEC server T^{txd}_{SMD}(v_{1,1,k}) = 3, the time duration required to receive the output data of each task from the MEC server T^{rxd}_{SMD}(v_{1,1,k}) = 1, and the execution time of each task on the MEC server T^{exe}_{MEC}(v_{1,1,k}).
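The per-task offloading quantities described in this subsection (transmit, execute, and receive times, plus the SMD energies for transmitting and receiving) can be sketched as follows. All numeric inputs are illustrative assumptions, not values from the article or Table II.

```python
def offload_profile(d, o, c, R, F, p_txd, p_rxd):
    """Time and SMD-side energy of offloading one task to the MEC server.

    d/o: input/output data sizes (bits); c: CPU cycles; R: uplink rate
    (bits/s); F: MEC computing capability (cycles/s); p_txd/p_rxd: SMD
    transmit/receive power (W). Returns (total offload time, SMD energy).
    """
    t_txd = d / R          # time to upload input data
    t_exe = c / F          # execution time on the MEC server
    t_rxd = o / R          # time to download output data
    e_txd = p_txd * t_txd  # SMD energy spent transmitting
    e_rxd = p_rxd * t_rxd  # SMD energy spent receiving (modeled here,
                           # unlike the cloud scheduling model in [25])
    return t_txd + t_exe + t_rxd, e_txd + e_rxd

total_time, smd_energy = offload_profile(
    d=2e6, o=5e5, c=1e9, R=1e7, F=1e10, p_txd=0.1, p_rxd=0.05)
# -> total_time = 0.2 + 0.1 + 0.05 = 0.35 s; smd_energy = 0.02 + 0.0025 J
```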

F. Problem Formulation
The new MCOP model aims to simultaneously minimize the ACT of applications and the AEC of SMDs in the above MEC system.
Let CT^{exe}_{SMD}(v_{i,j,end}) and CT^{rxd}_{SMD}(v_{i,j,end}) be the completion time for executing the end task v_{i,j,end} of application G_{i,j} on SMD U_{i,j} and that for receiving the output data of v_{i,j,end} via the wireless channel, respectively. Let \varepsilon_{i,j,end} be a binary variable indicating whether v_{i,j,end} is executed on U_{i,j} or the MEC server: \varepsilon_{i,j,end} = 1 means v_{i,j,end} is executed on U_{i,j}, and \varepsilon_{i,j,end} = 0 otherwise. The completion time of application G_{i,j} on SMD U_{i,j}, CT(G_{i,j}), defined in (14), equals the completion time of the end task:

CT(G_{i,j}) = \varepsilon_{i,j,end} \, CT^{exe}_{SMD}(v_{i,j,end}) + (1 - \varepsilon_{i,j,end}) \, CT^{rxd}_{SMD}(v_{i,j,end}).     (14)

With the obtained completion times of all applications on all SMDs, the ACT of applications in the MEC system can be calculated as

ACT = \frac{1}{N} \sum_{i=1}^{S} \sum_{j=1}^{N^{SMD}_i} CT(G_{i,j})     (15)

where N = \sum_{i=1}^{S} N^{SMD}_i is the total number of SMDs in the MEC system.
The AEC of all SMDs in the MEC system can be obtained as

AEC = \frac{1}{N} \sum_{i=1}^{S} \sum_{j=1}^{N^{SMD}_i} E(G_{i,j})     (16)

where E(G_{i,j}) is the total energy consumed by SMD U_{i,j} for application G_{i,j}. The MCOP can then be defined as a biobjective MOP, minimizing ACT in (15) and AEC in (16) subject to all task-precedence constraints, where constraint C1 is the execution order (EO) constraint between two tasks, i.e., if e(v_{i,j,k}, v_{i,j,l}) \in E_{i,j}, task v_{i,j,l} cannot be executed before the completion of task v_{i,j,k}. Constraints C2 and C3 are the local task-precedence constraints, ensuring that v_{i,j,k} cannot be executed before all its immediate predecessors are completed. Constraints C4 and C5 are the edge task-precedence constraints, indicating that v_{i,j,k} cannot be executed before it is completely offloaded to the MEC server and all its immediate predecessors are completed on the MEC server. Constraint C6 is a computation offloading EL constraint, specifying where v_{i,j,k} is executed, i.e., on which core of U_{i,j} or on the MEC server.

A. MOP
An MOP can be defined as

\min F(x) = (f_1(x), . . . , f_m(x)), \quad \text{s.t.} \ x \in \Omega

where \Omega is the decision (search) space and the m objectives f_1(x), . . . , f_m(x) conflict with each other [39].
A solution x^* \in \Omega is known as Pareto optimal if no other solution in \Omega dominates it. The set of all Pareto-optimal solutions is known as the Pareto-optimal set, whose mapping in the objective space is known as the Pareto-optimal front (PF).
There are mainly two ways to handle an MOP. One is to convert it into a single-objective optimization problem (SOP) by objective aggregation. The commonly used method here is the weighted sum, where each objective, e.g., the ACT and AEC in this article, is assigned a weight. However, the weight values must be set in advance. Heuristics and metaheuristics (including EAs) are often used to address an SOP; running them once outputs a single solution, and if the system demands change, the weight values need to be reset. Hence, the first method obtains only a compromise solution, which cannot reflect the conflict between the objectives. The other way to tackle an MOP is to use MOEAs. An MOEA is capable of obtaining a set of nondominated solutions in a single run. These solutions reflect the Pareto-dominance relations among them, which is what a decision maker expects to know. Even if the system demands change, the nondominated solutions obtained by MOEAs remain valid. This is why MOEAs are more appropriate for addressing the MCOP.
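The Pareto-dominance relation underpinning the discussion above can be sketched for the biobjective minimization case. The objective values are illustrative (ACT, AEC) pairs.

```python
def dominates(f_a, f_b):
    """f_a dominates f_b (minimization): no worse in every objective
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def nondominated(points):
    """Keep the points not dominated by any other point in the set."""
    return [f for f in points
            if not any(dominates(g, f) for g in points if g != f)]

front = nondominated([(3.0, 8.0), (5.0, 4.0), (6.0, 9.0)])
# (6.0, 9.0) is dominated by (3.0, 8.0); the remaining two points are
# incomparable -- each trades ACT against AEC -- so both survive.
```

This is why a single weighted-sum run cannot convey the same information: it would return only one of the surviving points, discarding the tradeoff.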
Pareto-dominance-based MOEAs, such as NSGA-II, are the mainstream optimizers in the literature. Nevertheless, they usually suffer from premature convergence and entrapment in local optima. Compared with them, MOEA/D has been reported to achieve better global exploration ability with lower computational overhead [31], [40]. This motivates us to adapt MOEA/D for the new MCOP.

B. Original MOEA/D
MOEA/D has been applied to various MOPs due to its high effectiveness and low computational cost [31]-[33], [40], [43], [44]. MOEA/D decomposes an MOP into a number of SOSPs that are simultaneously optimized in a collaborative and time-efficient manner. It employs genetic operators to generate new solutions and obtains a set of nondominated solutions through an evolution process. Three basic decomposition methods have been employed in the literature, among which the Tchebycheff method is the most widely used and is adopted in our proposed algorithm.
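The Tchebycheff aggregation mentioned above can be sketched as follows: each subproblem minimizes the weighted Chebyshev distance to the ideal point z*. The weight vector and objective values are illustrative.

```python
def tchebycheff(f, weight, z_star):
    """g(x | w, z*) = max_i w_i * |f_i(x) - z*_i| (to be minimized)."""
    return max(w * abs(fi - zi) for fi, w, zi in zip(f, weight, z_star))

z_star = (1.0, 0.5)  # ideal (best-seen) ACT and AEC values
w = (0.8, 0.2)       # one subproblem's weight vector, biased toward ACT

g1 = tchebycheff((3.0, 2.0), w, z_star)  # max(0.8*2.0, 0.2*1.5) = 1.6
g2 = tchebycheff((2.0, 4.0), w, z_star)  # max(0.8*1.0, 0.2*3.5) = 0.8
# Under this ACT-biased weight vector, the second solution wins (g2 < g1)
# despite its worse AEC; a different weight vector can prefer the first,
# which is how the decomposition covers the whole front.
```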

A. Solution Representation and Evaluation
As mentioned in Section III, for application G_{i,j} on SMD U_{i,j}, the ELs and the EOs of all tasks in G_{i,j} need to be determined in the MCOP. Let V_{i,j} = \{v_{i,j,1}, . . . , v_{i,j,N_{i,j}}\} denote the set of tasks in G_{i,j}, and let L_{i,j} = (loc_{i,j,1}, . . . , loc_{i,j,k}, . . . , loc_{i,j,N_{i,j}}) be the EL vector of these tasks. Let SOL(\pi_i) be the subsolution collecting the ELs and EOs of all applications on all SMDs associated with SeNB \pi_i, i = 1, . . . , S. A solution to the MCOP, x = (SOL(\pi_1), . . . , SOL(\pi_S)), consists of the subsolutions associated with all SeNBs in the MEC system. Fig. 4 shows an example solution to the MCOP.
Given a solution x, its objective function values, F(x) = (f ACT (x), f AEC (x)), can be calculated using (15) and (16) in Section III-F.

B. Problem-Specific Population Initialization Scheme
The PSPI scheme is based on two methods: the LELI method and a commonly used EO initialization method.
1) Latency-Based Execution Location Initialization: The initial population usually has a significant impact on the optimization performance of an MOEA. An effective population initialization scheme helps to guide an MOEA toward promising areas of the search space [40]. A randomly generated initial population may have better diversity; however, it does not always help the search quickly locate areas containing high-quality solutions in an exponential search space. In particular, for highly constrained optimization problems, misleading search directions might seriously deteriorate the optimization performance of an MOEA [31].
To the best of our knowledge, most EAs for COP problems initialize the ELs of all tasks randomly [20], [27]. For most small-scale COP problems, random location generation helps to diversify the population and has a positive influence on the optimization performance. However, this method is no longer applicable to the highly constrained, large-scale MCOP concerned in this article, due to the large number of tasks involved and the task-precedence constraints.
The proposed LELI method decides whether a task is executed locally or offloaded to the MEC server by comparing its average computing time if it is executed on the SMD with its task offloading time if it is executed on the MEC server. As aforementioned, a solution to the MCOP problem, x, contains the ELs of all tasks in the MEC system and the EOs among them. By reducing the completion time of each task in a greedy manner, this method reduces the completion time of each application, which also helps to reduce the ACT of all applications in the MEC system.
Let T avg SMD (v i,j,k ) and T ofld MEC (v i,j,k ) be the average execution time and the task offloading time of v i,j,k , respectively. For an arbitrary application G i,j , the procedure to determine the ELs of all tasks is described as follows.
For each task v i,j,k in G i,j , T avg SMD (v i,j,k ) and T ofld MEC (v i,j,k ) are first calculated [see (7), (10), and (12)]. If T avg SMD (v i,j,k ) < T ofld MEC (v i,j,k ), v i,j,k is executed on a randomly selected core of U i,j ; otherwise, v i,j,k is offloaded to the MEC server. The EL initialization for all tasks in G i,j is shown in Algorithm 1, where randInt(1, H) is an integer randomly generated in the range [1, H].
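As a concrete illustration, the greedy comparison behind LELI can be sketched in Python as follows. The function and variable names are our own, not from the article, and the timing values are assumed to be precomputed via (7), (10), and (12):

```python
import random

def leli(tasks, avg_local_time, offload_time, H):
    """Sketch of latency-based EL initialization for one application.

    avg_local_time[k]: average execution time of task k on the SMD's cores.
    offload_time[k]:   time needed to offload task k to the MEC server.
    H: number of local cores; location H + 1 denotes the MEC server.
    """
    el = {}
    for k in tasks:
        if avg_local_time[k] < offload_time[k]:
            el[k] = random.randint(1, H)  # run on a randomly selected local core
        else:
            el[k] = H + 1                 # offload to the MEC server
    return el
```

Each task thus starts the evolution at the location that minimizes its own latency, which is exactly the greedy reduction of the ACT described above.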
2) Random-Selection-Based Execution Order Initialization: In [20], an efficient EO initialization method based on random selection is proposed, i.e., random-selection-based EO initialization (RSEOI), where a sortable task set maintains all tasks that are not yet sorted but whose immediate predecessors are all sorted. A task v i,j,r is randomly selected from this set and added to the end of the EO vector O i,j . After that, v i,j,r is removed from the set, and its immediate successors whose immediate predecessors are all sorted are inserted into it. Once all tasks in G i,j are sorted, the task selection process stops and O i,j is returned as the output.
By running the RSEOI method multiple times, a set of different EO vectors for G i,j can be obtained. RSEOI is thus incorporated into the PSPI scheme to diversify the initial population. The EO initialization for all tasks in G i,j is shown in Algorithm 2.
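In essence, RSEOI draws a random topological order of the task graph, so every task-precedence constraint is satisfied by construction. A minimal sketch with hypothetical names (`preds`/`succs` map each task to its immediate predecessor/successor sets):

```python
import random

def rseoi(tasks, preds, succs, start):
    """Sketch of random-selection-based EO initialization: returns a random
    topological order of the tasks, starting from the start task."""
    order = [start]
    done = {start}
    # sortable set: unsorted tasks whose immediate predecessors are all sorted
    sortable = [v for v in tasks if v != start and preds[v] <= done]
    while sortable:
        v = random.choice(sortable)   # random selection drives EO diversity
        sortable.remove(v)
        order.append(v)
        done.add(v)
        for s in succs[v]:            # successors may now become sortable
            if s not in done and s not in sortable and preds[s] <= done:
                sortable.append(s)
    return order
```

Repeated calls yield different feasible EO vectors, which is how the PSPI scheme diversifies the initial population.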
3) Overall Procedure of the Problem-Specific Population Initialization Scheme: The PSPI scheme is based on LELI and RSEOI. The pseudocode of PSPI is shown in Fig. 5, where randInt(1, H + 1) is an integer randomly generated in the range [1, H + 1]. The ELs of all tasks are initialized by random EL generation in half of the initial population, and by LELI (i.e., Algorithm 1) for the other half of the initial population. Random EL generation introduces a certain level of population diversity, while LELI provides high-quality solutions for the evolution. The EO vector associated with each application G i,j is initialized by Algorithm 2.

Algorithm 2 RSEOI
Input: task set V i,j and task-precedence constraints of G i,j .
Output: EO vector O i,j = (u i,j,1 , . . . , u i,j,N i,j ).
1. Set the sorted task set Z = {v i,j,start } and u i,j,1 = v i,j,start ;
2. Set the sortable task set to all unsorted tasks whose immediate predecessors are all in Z;
3. Set index = 1; // index of the current task in O i,j
4. while the sortable task set is not empty do
5.   Randomly select a task v i,j,r from the sortable task set;
6.   Set index = index + 1, u i,j,index = v i,j,r , and Z = Z ∪ {v i,j,r };
7.   Remove v i,j,r from the sortable task set and insert into it each immediate successor of v i,j,r whose immediate predecessors are all in Z;
8. Output O i,j .

C. DVFS-Based Energy Conservation Scheme
For a given SMD, if a high-performance core and a low-performance core achieve similar computing performance when executing a given task, then executing it on the latter reduces the energy consumption. The DVFS technique can be utilized to reduce the computing frequency of high-performance cores of SMDs for energy conservation purposes.
Recently, DVFS has been widely used as a promising power management solution to reduce the energy consumption of SMDs in MCC [25], [27], [45]–[48]. However, to the best of our knowledge, there is a lack of research applying DVFS to COP and MCOP problems in MEC. As mentioned in Section III-D, there are H heterogeneous cores in each SMD, where each core can run at M different computing frequency levels. This article introduces a DVFS-EC scheme into the proposed algorithm to further decrease the energy consumption of SMDs.
In [25], a DVFS algorithm is presented for a COP in MCC. By dynamically tuning the computing frequency level of each core, this algorithm can significantly reduce the energy consumption of the associated mobile device.
The DVFS algorithm in [25] is adapted for the MCOP formulated in this article, as shown in Algorithm 3.

Algorithm 3 DVFS Based on SOL(G i,j )
Input: task scheduling plan associated with SOL(G i,j ).
Output: new task scheduling plan with a new computing frequency level assignment for local tasks.
1. for k = 1 to N i,j do
2.   if 1 ≤ loc i,j,k ≤ H then // local tasks
3.     Set t = 1;
4.     while t ≤ M do
5.       Calculate the new completion time CT exe,new SMD (v i,j,k ) if v i,j,k is executed using the tth computing frequency level;
6.       if there exists a next task v i,j,next on the same core then
7.         Set limit 1 to the start time of v i,j,next ;
8.       else if v i,j,k is the last task on this core then
9.         Set limit 1 = CT(G i,j );
10.      if v i,j,k is not the end task then
11.        Set limit 2 to the earliest start time among the immediate successors of v i,j,k ;
12.      else Set limit 2 = CT(G i,j );
13.      if CT exe,new SMD (v i,j,k ) ≤ limit 1 and CT exe,new SMD (v i,j,k ) ≤ limit 2 then
14.        Assign the tth computing frequency level to v i,j,k ;
15.      Set t = t + 1;

Given application G i,j with its SOL SOL(G i,j ), the associated computation offloading schedule is first calculated, including the start time and the completion time of v i,j,k , ST exe SMD (v i,j,k ) and CT exe SMD (v i,j,k ), and the completion time of G i,j , CT(G i,j ), according to Sections III-D–III-F. Algorithm 3 then reduces the energy consumption of SMD U i,j by iteratively tuning the computing frequency levels of the local cores that are used to execute tasks. The resulting task scheduling plan with the new computing frequency level assignment consumes less energy. Different from the DVFS algorithm in [25], which might lead to a higher completion time, Algorithm 3 does not require additional time for completing G i,j .
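The core idea of Algorithm 3, i.e., lower a task's frequency level only if the stretched completion time still respects both limits, can be sketched as follows. This is our own simplification, in which the hypothetical `budget[k]` stands for the tighter of limit 1 and limit 2, expressed as an execution-time budget:

```python
def dvfs_tune(task_time, budget, alphas=(0.2, 0.5, 0.8, 1.0)):
    """Sketch of the DVFS-EC frequency tuning for the local tasks of one SMD.

    task_time[k]: execution time of task k at the core's maximum frequency.
    budget[k]:    largest execution time task k may take without delaying the
                  next task on its core or the application completion time.
    Returns the lowest feasible scaling factor per task (1.0 = no scaling).
    """
    chosen = {}
    for k, t in task_time.items():
        chosen[k] = 1.0                 # fall back to maximum frequency
        for alpha in sorted(alphas):    # try the most energy-saving level first
            if t / alpha <= budget[k]:  # time stretches as the clock slows
                chosen[k] = alpha
                break
    return chosen
```

Because a task is only slowed down into slack that already exists in the schedule, the completion time of the application is unchanged while its energy consumption drops.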
The pseudocode of the DVFS-EC scheme is shown in Fig. 6. For each application G i,j with a certain SOL(G i,j ), Algorithm 3 obtains a new computing frequency level assignment. The DVFS-EC scheme aims at reducing the AEC of all SMDs in the MEC system, which helps to improve the quality of solutions to the MCOP.

Step 1) Initialization:
Step 1.1) Set EP = ∅.
Step 1.3) Generate an initial population, x 1 , . . . , x N P , by using the PSPI scheme in Section V-B and evaluate the objective functions for each solution.
Step 2) Repeat:
Step 2.1) Reproduction: Apply the crossover (see Algorithm 5) and mutation (see Algorithm 7) operators to generate a new solution y based on x k and x l , where k, l ∈ ϕ(i) and k ≠ l.
Step 2.2) DVFS-EC: Apply the DVFS-EC scheme (see Section V-C) to reduce the AEC value of y, f AEC (y).
Step 2.5) Update of EP: Remove from EP those solutions dominated by y, and add y to EP if no solution in EP dominates y.
Step 3) Stopping Condition: If the stopping condition is met, then stop and output EP. Otherwise, go to Step 2.
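The external-population update in Step 2.5 is a standard Pareto-dominance filter; a minimal sketch for the two minimization objectives (f ACT, f AEC), with names of our own choosing:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_ep(ep, y):
    """Step 2.5: drop solutions dominated by y; add y if nothing dominates it."""
    if any(dominates(e, y) for e in ep):
        return ep                                  # y is dominated: EP unchanged
    return [e for e in ep if not dominates(y, e)] + [y]
```

Applied after every reproduction step, this keeps EP as the set of mutually nondominated solutions found so far.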
In step 2.1, crossover and mutation are applied to x k and x l (two neighbors of x i ) to generate a new solution y. By combining selected portions from two parent solutions, the crossover operator is regarded as the main evolutionary force for offspring production. Offspring solutions inherit some features from their parents. Yet, they are capable of exploring new areas in the search space as long as their parents are not similar to each other.
Let x par1 and x par2 be two parent solutions, and let SOL par1 (G i,j ) and SOL par2 (G i,j ) be the SOLs for G i,j in x par1 and x par2 , respectively. The crossover operator is applied to each SOL par1 (G i,j ) and SOL par2 (G i,j ) pair, i = 1, . . . , S, j = 1, . . . , N SMD i , to obtain two offspring SOLs for G i,j , namely, SOL off 1 (G i,j ) and SOL off 2 (G i,j ).

Algorithm 4 EL and EO Crossovers on Two SOLs Associated With G i,j
Input: two parent SOLs for G i,j , i.e., SOL par1 (G i,j ) and SOL par2 (G i,j ).
Output: two offspring SOLs for G i,j , i.e., SOL off 1 (G i,j ) and SOL off 2 (G i,j ).
According to the task graph in Fig. 2, we present an example of the EL crossover operation in Fig. 7.
In the EO crossover, all task-precedence constraints must be met. A simple crossover is very likely to produce infeasible EO vectors for each application, as repetitive tasks may be created. In [20], an effective task EO crossover operator ensures that all task-precedence constraints are always satisfied. This operator is adopted as the EO crossover in MOEA/D-MCOP, as described below.
An example of the EO crossover operation applied to the task graph in Fig. 2 is shown in Fig. 8.
for i = 1 to S do
  for j = 1 to N SMD i do
    Obtain two offspring SOLs, SOL off 1 (G i,j ) and SOL off 2 (G i,j ), by running Algorithm 4 on SOL par1 (G i,j ) and SOL par2 (G i,j );

Fig. 7. Example of the EL crossover. Fig. 8. Example of the EO crossover applied to the task graph in Fig. 2.
Based on Algorithm 4, we design the crossover operator on two parent solutions in Algorithm 5.
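The EO crossover of [20] is not reproduced in full here, but one common precedence-preserving variant with the same guarantee (no repeated tasks, no violated precedence) keeps a prefix of one parent and fills the rest in the other parent's relative order:

```python
import random

def eo_crossover(order1, order2):
    """Precedence-preserving crossover of two topological orders (sketch).

    Every edge u -> v stays satisfied: if v lies in the copied prefix, u must
    already precede it there; otherwise v is placed according to the other
    parent's relative order, which is itself topological.
    """
    c = random.randint(1, len(order1) - 1)  # random crossover point
    head1, head2 = set(order1[:c]), set(order2[:c])
    child1 = order1[:c] + [t for t in order2 if t not in head1]
    child2 = order2[:c] + [t for t in order1 if t not in head2]
    return child1, child2
```

Because both children remain topological orders, no repair step is needed after crossover.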
Mutation plays an important role in introducing diversity into the evolution. Bitwise mutation is applied to the EL vector L par i,j , and single-point mutation is applied to the EO vector O par i,j in each SOL par (G i,j ), i = 1, . . . , S, j = 1, . . . , N SMD i , to obtain an offspring SOL for G i,j , namely, SOL off (G i,j ). This is described in Algorithm 6, where random(0, 1) is a number randomly generated in the range (0, 1). A mutation probability for all applications in the MEC system, MP app , is used to decide whether SOL par (G i,j ) is mutated, i = 1, . . . , S, j = 1, . . . , N SMD i . In the EL mutation, a mutation probability for L par i,j , MP loc i,j , is adopted to decide whether each EL in L par i,j is mutated. If an EL loc par i,j,k in L par i,j is chosen for mutation, a random integer from {1, . . . , H + 1} is used to replace loc par i,j,k . After mutation, an EL vector L off i,j is generated. For the task graph in Fig. 2, an example of the EL mutation operation is presented in Fig. 9.

Algorithm 6 EL and EO Mutation Procedures
Input: parent SOL for G i,j , i.e., SOL par (G i,j ) = (L par i,j , O par i,j ).
Output: offspring SOL for G i,j , i.e., SOL off (G i,j ) = (L off i,j , O off i,j ).
// EL mutation
1. for k = 1 to N i,j do
2.   Generate a random number MP loc rnd = random(0, 1);
3.   if MP loc rnd ≤ MP loc i,j then
4.     Generate a random integer randInt(1, H + 1) and use it to replace loc par i,j,k ;
// EO mutation
5. Randomly select a task u par i,j,r from O par i,j and determine the set temp of its feasible positions;
6. Randomly select a location in temp and move u par i,j,r there;

In Algorithm 7, for each application, the offspring SOL SOL off (G i,j ) is obtained by running Algorithm 6 on SOL par (G i,j ).
In the proposed MOEA/D, the EO mutation in [20] is adopted, where all task-precedence constraints are met. The mutated EO vector O off i,j = (u off i,j,1 , . . . , u off i,j,N i,j ) is obtained by concatenating the three vectors produced by the mutation procedure with the concatenation operator "•". An example of the EO mutation operation applied to the task graph in Fig. 2 is illustrated in Fig. 10.
Based on Algorithm 6, Algorithm 7 presents how a parent solution is mutated.
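Both mutation operators can be sketched as follows (our own simplified version; `preds`/`succs` are hypothetical immediate predecessor/successor sets, and the EO move mirrors the feasible-window idea that keeps precedence intact):

```python
import random

def el_mutation(el, H, mp_loc):
    """Bitwise EL mutation: with probability mp_loc, each location is replaced
    by a random location in {1, ..., H + 1} (H + 1 = MEC server)."""
    return [random.randint(1, H + 1) if random.random() < mp_loc else loc
            for loc in el]

def eo_mutation(order, preds, succs):
    """Single-point EO mutation: reinsert one random task anywhere after all
    of its immediate predecessors and before all of its immediate successors,
    so the mutated order stays topological."""
    r = random.choice(order)
    rest = [t for t in order if t != r]
    pos = {t: i for i, t in enumerate(rest)}
    lo = max((pos[p] for p in preds[r]), default=-1) + 1
    hi = min((pos[s] for s in succs[r]), default=len(rest))
    rest.insert(random.randint(lo, hi), r)  # any slot in [lo, hi] is feasible
    return rest
```

As in Algorithm 6, the EL part perturbs execution locations while the EO part perturbs the schedule without ever producing an infeasible order.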

E. Complexity Analysis
Let O(f ) be the time complexity of evaluating a solution to the MCOP. Let N task total and M be the total number of tasks in the MEC system and the number of computing frequency levels on a core, respectively. Let m and W denote the number of objectives and the number of neighbors of each subproblem, respectively. Let |EP| denote the size of the EP.
First, we analyze the complexity of each step in the loop. As the encoding length of each solution is N task total , step 2.1 (simple crossover and mutation) has a time complexity of O(N task total ). There are two operations in step 2.2 (namely, the DVFS-EC scheme): solution evaluation and solution improvement. In the first part, O(f ) is the time complexity as defined above. In the second part, a solution is improved in terms of energy consumption. The DVFS technique is applied to each locally executed task, resulting in a time complexity of O(M) per task. In the worst case, all tasks are executed on SMDs, which corresponds to a time complexity of O(N task total · M). Hence, step 2.2 has a time complexity of O(f + N task total · M).

A. Test Instances
In this article, we consider a centralized MEC system with a radius of 100 m. The network parameter setup method in [30] is adopted. A system with five small cells is regarded as a medium-scale MEC scenario, which meets most users' requirements; therefore, we use a five-small-cell MEC network to conduct all experiments. The cells are evenly scattered, each with a radius of 50 m. The number of channels in the MEC system is fixed to 10 for simplicity. To guarantee that there is no channel interference between SMDs within any small cell, we randomly generate the number of SMDs in each small cell in the range [3, 9].
For an arbitrary SMD U i,j , the maximum computing frequency of the first core, f max i,j,1 , is randomly generated in the range [0.5, 1] GHz. The maximum computing frequencies of the second and third cores are set to f max i,j,2 = f max i,j,1 − 0.1 and f max i,j,3 = f max i,j,1 − 0.25, respectively. The power consumption of the first, second, and third cores at the maximum computing frequency is assumed to be 4, 2, and 1 W, respectively. According to [25], we assume each core has four computing frequency levels with scaling factors α 1 = 0.2, α 2 = 0.5, α 3 = 0.8, and α 4 = 1, respectively, and the constant γ is set to 2. The problem characteristics are shown in Table III. For the application generation, we first randomly generate the number of tasks, and then randomly generate their data sizes and the numbers of CPU cycles required to perform them. The task-precedence constraints between tasks are randomly generated based on the task generation method introduced in [20]. To be specific, we randomly generate the number of tasks in an application using six different ranges to control the scale of the MCOP problem, as shown in Table IV. In each instance,
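Under this setup, the execution time and energy of a task at a given frequency level follow directly from the scaling factor. A small sketch, under our assumption that a core's power scales as α^γ relative to its power at maximum frequency:

```python
ALPHAS = (0.2, 0.5, 0.8, 1.0)  # frequency scaling factors as in [25]
GAMMA = 2                      # power-scaling constant used in the setup

def time_and_energy(cycles, f_max_hz, p_max_w, alpha):
    """Execution time (s) and energy (J) of a task needing `cycles` CPU
    cycles on a core with maximum frequency f_max_hz and peak power p_max_w,
    when clocked at fraction alpha of the maximum frequency."""
    t = cycles / (alpha * f_max_hz)      # lower clock -> longer execution
    e = p_max_w * (alpha ** GAMMA) * t   # assumed power model: P ~ alpha^gamma
    return t, e
```

With γ = 2, halving the frequency doubles the execution time but quarters the power, so the task's energy is halved; this is exactly the slack the DVFS-EC scheme exploits.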

B. Experimental Setup
We ran the experiments on a computer with Windows 10 OS, Intel Core i7-8700 CPU 3.2 GHz, and 16-GB RAM. All algorithms were implemented using Python 3.6. The parameters of the proposed MOEA/D-MCOP are listed in Table V. The results are obtained by running each algorithm 20 times, from which the statistics are collected and analyzed.

C. Performance Measures
Four widely recognized performance metrics in [31] and [32] are used to thoroughly evaluate the performance of MOEA/D-MCOP. Let PF ref and PF known denote the reference PF approximating the true PF and the PF obtained by an algorithm, respectively.
For the new MCOP problem concerned in this article, the true PF is not known. A widely used method is to collect the best solutions found so far by all algorithms in all runs and take the PF associated with the nondominated ones as PF ref .
1) Inverted Generational Distance: The inverted generational distance (IGD) can simultaneously measure the convergence and diversity of a given PF. For an algorithm, a smaller IGD value reflects a better overall performance, i.e., PF known converges better to PF ref :

IGD(PF known , PF ref ) = (1/|PF ref |) Σ τ ref ∈ PF ref d(τ ref , PF known )

where |PF ref | is the number of points in PF ref and d(τ ref , PF known ) is the Euclidean distance between point τ ref in PF ref and its nearest point in PF known .

2) Generational Distance: The generational distance (GD) measures how far PF known is from PF ref ; a smaller GD value indicates better convergence:

GD(PF known , PF ref ) = (1/|PF known |) Σ τ known ∈ PF known d(τ known , PF ref )

where |PF known | is the number of points in PF known and d(τ known , PF ref ) is the Euclidean distance between point τ known in PF known and its nearest point in PF ref .
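Both metrics reduce to averaged nearest-neighbor distances between the two fronts; a minimal sketch consistent with the standard definitions:

```python
import math

def _nearest(point, front):
    """Euclidean distance from point to its nearest point in front."""
    return min(math.dist(point, q) for q in front)

def gd(pf_known, pf_ref):
    """Generational distance: how far PF_known is from PF_ref on average."""
    return sum(_nearest(p, pf_ref) for p in pf_known) / len(pf_known)

def igd(pf_known, pf_ref):
    """Inverted generational distance: average distance from each reference
    point to its nearest obtained point; smaller values indicate better
    convergence and diversity."""
    return sum(_nearest(r, pf_known) for r in pf_ref) / len(pf_ref)
```

Note the asymmetry: GD averages over the obtained front, while IGD averages over the reference front, which is why IGD also penalizes poor diversity.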

3) Student's t-test:
In this article, a two-tailed t-test with 38 degrees of freedom at a 0.05 level of significance [49] is utilized to compare two algorithms based on the IGD values obtained in 20 runs. The results show whether the performance of one algorithm is significantly better than, significantly worse than, or statistically equivalent to that of the other.

4) Friedman Test:
The Friedman test [50] is a nonparametric test for detecting differences among algorithms in terms of IGD and GD. All algorithms under comparison are ranked, and their average ranks explicitly indicate how well they perform.

D. Effectiveness of Two Performance Enhancing Schemes
To demonstrate the effectiveness of the two new schemes, namely, PSPI in Section V-B and DVFS-EC in Section V-C, in the proposed MOEA/D-MCOP, the following three variants of MOEA/D are tested on the six test instances in Table IV. The mean and standard deviation (SD) of the IGD and GD results are collected in Tables VI and VII, respectively. It is obvious that MOEA/D-PSPI outperforms MOEA/D on the two performance measures in all instances. This is because the problem-specific knowledge incorporated in the PSPI scheme is able to guide the search to start from promising areas. Moreover, MOEA/D-MCOP achieves a better mean value than the other two variants in terms of IGD and GD in each instance. This shows that the DVFS-EC scheme helps to reduce the energy consumption of SMDs without sacrificing completion time, thus enhancing the local exploitation ability of the search. Fig. 11 shows the PFs obtained by the three algorithms. It can be seen clearly that both the PSPI and DVFS-EC schemes contribute to the performance improvement of MOEA/D.

E. Overall Performance Evaluation
MOEA/D-MCOP is compared against the following eight state-of-the-art algorithms, i.e., five MOEAs and three heuristic algorithms, on the six test instances in Table IV.
1) NSGA-II: The modified fast and elitist nondominated sorting genetic algorithm [30], used to achieve a tradeoff between the AEC and the ACT in an MEC network.
2) MOWOA: The multiobjective whale optimization algorithm [27], applied to address the multiobjective task workflow scheduling problem, where a weighted sum is used to aggregate workflow completion time and energy consumption into one objective function.
3) MOFOA: The knowledge-guided multiobjective fruit fly optimization algorithm [51], developed to tackle the multiskill resource-constrained project scheduling problem, where the completion time and the total cost are minimized at the same time.
4) HGPCA: The hierarchical GA and PSO-based computation algorithm [23], proposed to solve the multiuser offloading game problem in MCC, where the energy consumption of SMDs is minimized.
5) MOEA/D: The MOEA based on decomposition [7] with the Tchebycheff method.
6) TSDVFS: The task scheduling with DVFS algorithm [25], developed to minimize the energy consumption of SMDs in MCC, where the application completion time constraint and the task-precedence constraints are satisfied.
7) CTESA: The collaborative task execution scheduling algorithm [22], devised to address the delay-constrained workflow scheduling problem in the MCC network. CTESA minimizes the energy consumption of SMD(s) while meeting the application completion time deadline.
8) eDors: The energy-efficient dynamic offloading and resource scheduling algorithm [28], presented to reduce the energy consumption and shorten the application completion time, where the task-dependency requirement and the application completion time deadline are constrained.
9) MOEA/D-MCOP: The proposed MOEA/D with the PSPI and DVFS-EC schemes in this article.
For all MOEAs under comparison, the population size and the predefined number of iterations are both set to 100. To make a fair comparison, we directly adopt the parameter settings in NSGA-II [30], MOWOA [27], MOFOA [51], HGPCA [23], and MOEA/D [7]. To be specific, in NSGA-II, the crossover and mutation probabilities are set to 0.8 and 0.3, respectively. In MOWOA, the upper and lower bounds of the search range are set to 4.4 and 0.5, respectively. For MOFOA, we set the subswarm size, the learning rate of the experience, and the number of elite fruit flies to 5, 0.1, and 3, respectively. In HGPCA, the crossover probability, the mutation probability, the inertia weight, and the acceleration constant are set to 0.6, 0.01, 0.4, and 1.5, respectively. In MOEA/D and MOEA/D-MCOP, the number of neighbors for each subproblem is set to 10.
Note that each of the three heuristics obtains only a single solution per run. To make a fair comparison, each heuristic should obtain a set of nondominated solutions for the performance comparison. Hence, we repeatedly run each heuristic with an incrementally increased application completion time deadline as a constraint. Each deadline results in a solution with an explicit application completion time and energy consumption. By doing so, each heuristic can obtain a set of nondominated solutions after a number of runs.
We first compare the ACT of applications, i.e., ACT, and the AEC of SMDs, i.e., AEC, obtained by the nine algorithms. Figs. 12 and 13 depict the box plots of the nine algorithms in terms of ACT and AEC, respectively. In Fig. 12, one can observe that MOEA/D-MCOP performs better than the other eight algorithms in most of the test instances (except Instances 3 and 4). This is because the PSPI scheme in MOEA/D-MCOP adopts the LELI method. By reducing the completion time of each task in a greedy manner, LELI can reduce the completion time of each application, which also helps to reduce the ACT in the MEC system.
In Fig. 13, MOEA/D-MCOP is clearly the best. This is because the DVFS-EC scheme can significantly reduce the AEC by dynamically adjusting the frequency level of each core. Besides, eDors and TSDVFS are the second- and third-best algorithms, respectively. These two algorithms decrease the energy consumption of SMDs thanks to DVFS. Meanwhile, eDors always outperforms TSDVFS with respect to the AEC. The reason is that eDors takes advantage of a transmission power control mechanism, which further reduces the energy consumption. However, neither eDors nor TSDVFS is good at obtaining a decent tradeoff between ACT and AEC, i.e., improving one objective harms the other. If we consider ACT and AEC together, MOEA/D-MCOP achieves the best overall performance.
The mean and SD values of IGD and GD obtained by all algorithms are shown in Tables VIII and IX, respectively. First, if taking all algorithms into account, one can observe that MOEA/D-MCOP performs the best with respect to IGD and GD in all instances. IGD reflects the diversification and convergence of nondominated solutions simultaneously. GD reveals how far the obtained PF is from the reference PF. Results in Tables VIII and IX show that MOEA/D-MCOP always obtains the set of nondominated solutions closest to the reference PF, which indicates MOEA/D-MCOP achieves a better tradeoff between global exploration and local exploitation.
Second, it is easily seen that MOEA/D outperforms all MOEAs except MOEA/D-MCOP in almost all instances, showing that MOEA/D is highly effective for the MCOP problem. On the one hand, NSGA-II, MOWOA, MOFOA, and HGPCA are Pareto-dominance based. If their parameters are not set appropriately, they are likely to get stuck in local optima and converge slowly. On the other hand, MOEA/D is decomposition based, addressing a number of SOSPs in parallel. Compared with the Pareto-dominance-based MOEAs, MOEA/D features stronger global exploration capability. Therefore, MOEA/D performs better than NSGA-II, MOWOA, MOFOA, and HGPCA. This also justifies why MOEA/D is chosen for addressing the MCOP problem.
Third, among the heuristics, TSDVFS is the winner, as it outperforms CTESA and eDors in terms of IGD and GD in most test instances except Instances 3 and 4. TSDVFS first adopts an initial scheduling algorithm to generate the minimal-delay schedule. Then, it applies the DVFS technique to reduce the energy consumption of SMDs. However, TSDVFS cannot strike a balance between the application completion time and the energy consumption of SMDs, which is why it is beaten by MOEA/D-MCOP. eDors and CTESA also have obvious drawbacks: eDors is not good at reducing the application completion time, and CTESA schedules the tasks on the partial critical path rather than considering the task graph as a whole.
The results of Student's t-test based on IGD are shown in Table X. MOEA/D-MCOP is clearly the best among all algorithms. The Friedman test is also utilized to rank algorithm performance. Based on the IGD and GD values, the average rankings of the nine algorithms are shown in Table XI.

A. Conclusion
This article models a new MCOP in the MEC environment, where two objectives, namely, the ACT of applications and the AEC of all SMDs, are minimized simultaneously. This new MCOP model, for the first time, considers the task-precedence constraints within each application in MEC, where an ordered list of tasks should be executed one by one.
To address the new problem, an improved MOEA/D with two extensions, namely, MOEA/D-MCOP is proposed. The first extension is a PSPI scheme that generates high-quality initial population. The second extension is a DVFS-EC scheme that improves the quality of a given solution by reducing the energy consumption of SMDs. The simulation results demonstrate that the proposed MOEA/D-MCOP performs better than the five state-of-the-art MOEAs and three heuristics in terms of the ACT, AEC, IGD, GD, t-test, and the Friedman test.

B. Future Work
The MCOP problem modeled in this article is a static optimization problem in the MEC network, where the number of SMDs remains unchanged and SMDs do not move during computation offloading. However, in the real world, dynamics and uncertainty are key features of MEC networks, such as user mobility, ever-changing wireless channels, and a varying number of SMDs. We will study the MCOP problem in a dynamic MEC environment, taking the three issues above into consideration. In this case, MOEA/D-MCOP cannot respond to the dynamic MEC network within a short time, especially when SMDs move quickly. Therefore, in future work, we will concentrate on developing online algorithms and models, e.g., problem-specific heuristics and deep reinforcement learning-based models.
The computing resources on MEC servers and the spectrum resources in wireless channels are both limited in the MEC environment. It is therefore of significance to study how computing and spectrum resources are reasonably allocated among SMDs in MEC networks. Specifically, we will study a centralized MEC scenario with limited computing and spectrum resources, jointly taking computation offloading, resource allocation, content caching, and the task-precedence constraints among tasks into account, so as to meet the requirements of various applications. We will model this complicated scenario as a new MOP with three objectives to be minimized simultaneously: the completion time of applications, the energy consumption of SMDs, and the resource cost of SMDs. The resource cost includes the cost of renting computing resources from MEC servers and that of leasing spectrum resources from small cells. In addition, we will propose an efficient multiobjective optimization algorithm to address this problem.