Consider GPU-original tasks when they are mapped to the CPU

Also update the README file accordingly.
Change-Id: I7856842f5e9915e28d199e754980662d0f1108d1
Signed-off-by: Junhyung Ki <kijoonh91@gmail.com>
diff --git a/eclipse-tools/responseTime-analyzer/README.md b/eclipse-tools/responseTime-analyzer/README.md
index acbf2f7..ed9886f 100644
--- a/eclipse-tools/responseTime-analyzer/README.md
+++ b/eclipse-tools/responseTime-analyzer/README.md
@@ -8,12 +8,14 @@
 ### 3. Contribution & benefits for the community
 ### 4. Contents
 ### 5. Diagram Example
-### 6. Validation
+### 6. Instructions
+### 7. Remarks
+### 8. Updates (Phase 2: June/24 ~ July/21)
 
 # 1. Milestone with the goal of each phase
-- **Response Time Analysis_CPU Part (Part 1)**
-- E2E Latency (Part 2)
-- LET, EC, IC Communication Paradigms to the model (Part 3)
+- Response Time Analysis_CPU Part (Phase 1)
+- **Refine Previous Phase & E2E Latency Foundation (EC, IC, LET) (Phase 2: June/24 ~ July/21)**
+- Finalize LET, EC, IC and the corresponding UI part (Phase 3)
 
 # 2. Intention
 The current APP4MC library already provides, through its Util package, several methods for deriving the execution time of a task, a runnable, or ticks (pure computation), but methods for response time are still missing. The reason is that response time analysis varies with the analyzed model, so it is hard to generalize. However, as platforms evolve from homogeneous to heterogeneous, the analysis methodology has become much more sophisticated, so a CPU response time analysis that can be reused for different mapping analyses with different processing unit types (e.g., GPU) is needed.
@@ -38,7 +40,7 @@
 Sort the given list of tasks (shortest period first - Rate Monotonic Scheduling)          
 **preciseTestCPURT** (Response Time analysis Equation Explanation)          
 Calculate response time of the observed task according to the periodic tasks response time analysis algorithm.          
-> Ri = Ci + Σj ∈ HP(i) [Ri/Tj]*Cj ([a standardized response time analysis methodology](https://www.semanticscholar.org/paper/Finding-Response-Times-in-a-Real-Time-System-Joseph-Pandya/574517d6e47cf9b368003a56088651a1941dcda1)(Mathai Joseph and Paritosh Pandya, 1986))          
+> Ri = Ci + Σj ∈ HP(i) ⌈Ri/Tj⌉ * Cj ([a standardized response time analysis methodology](https://www.semanticscholar.org/paper/Finding-Response-Times-in-a-Real-Time-System-Joseph-Pandya/574517d6e47cf9b368003a56088651a1941dcda1), Mathai Joseph and Paritosh Pandya, 1986)
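+
+A minimal, self-contained sketch of this fixed-point iteration (illustrative only; not the project's implementation, and the class name, variable names and example numbers are made up):
+```java
+import java.util.Arrays;
+import java.util.List;
+
+public class RtaEquationSketch {
+	/** Fixed-point iteration of Ri = Ci + sum over j in HP(i) of ceil(Ri / Tj) * Cj. */
+	static double responseTime(final int i, final List<Double> c, final List<Double> t) {
+		double r = c.get(i);                     // start with the task's own execution time Ci
+		while (true) {
+			double next = c.get(i);
+			for (int j = 0; j < i; j++) {        // index 0 = shortest period = highest priority (RMS)
+				next += Math.ceil(r / t.get(j)) * c.get(j);
+			}
+			if (next == r) {
+				return r;                        // converged: Ri found
+			}
+			if (next > t.get(i)) {
+				return Double.POSITIVE_INFINITY; // exceeds its own period: not schedulable
+			}
+			r = next;
+		}
+	}
+
+	public static void main(String[] args) {
+		final List<Double> c = Arrays.asList(1.0, 2.0, 3.0);  // execution times Ci
+		final List<Double> t = Arrays.asList(4.0, 8.0, 16.0); // periods Ti
+		System.out.println(responseTime(2, c, t));            // prints 7.0
+	}
+}
+```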
            
 <`RuntimeUtilRTA.java`>          
 **getExecutionTimeforCPUTask**          
@@ -48,26 +50,41 @@
 **syncTypeOperation**          
 Calculate execution time of the given runnableList in a synchronous manner.          
 **asyncTypeOperation**          
-Calculate execution time of the given runnableList in an asynchronous manner.          
-**getExecutionTimeForRTARunnable**           
-Calculate execution time of the given runnable.         
+Calculate execution time of the given runnableList in an asynchronous manner.         
+**getExecutionTimeForGPUTaskOnCPU**          
+Calculate execution time of the given task, which was originally designed for the GPU but is newly mapped to the CPU by the Genetic Algorithm mapping.          
+**getExecutionTimeForRTARunnable**          
+Calculate execution time of the given runnable.          
+**getTaskMemoryAccessTime**         
+Calculate memory access time of the observed task.           
 **getRunnableMemoryAccessTime**          
-Calculate memory access time of the observed runnable.          
+Calculate memory access time of the observed runnable.           
 > (Explanation)         
 > Read(Write)_Access_Time = Round_UP(Size_of_Read_Labels / 64.0 Bytes) * (Read_Latency / Frequency)        
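+> Worked example (numbers are illustrative only): reading 200 Bytes of labels with Read_Latency = 8 cycles on a 2 GHz core gives Round_UP(200 / 64.0) * (8 / 2,000,000,000 Hz) = 4 * 4 ns = 16 ns        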
-
 **isTriggeringTask**         
 Identify whether the given task has an InterProcessTrigger or not.          
          
 <`RTApp.java`>          
 User Interface Window           
-[APP4RTA_1.0_Description](Add Ref here)
-
-# 5. Diagram Example
-![Class Diagram](Add Ref here)
-
-* Sequence Diagram
-
-
-
-# 6. Validation
+[APP4RTA_1.0_Description](plugins/doc/APP4RTA_1.0_Description.pdf) ('responseTime-analyzer' > 'plugins' > 'doc' > 'APP4RTA_1.0_Description.pdf')         
+            
+# 5. Diagram Example           
+![Class Diagram](Add Ref here) (diagram files are under 'responseTime-analyzer' > 'plugins' > 'doc')            
+            
+# 6. Instructions            
+1. Under the 'responseTime-analyzer'>'plugins'>'src'>...>'gsoc_rta' folder, there is the 'CpuRTA' class. This is the implementation source file; running it derives the total sum of response times of the given model.            
+2. Under the 'responseTime-analyzer'>'plugins'>'src'>...>'gsoc_rta'>'ui' folder, there is the 'RTApp_WATERS19' class, the Java Swing UI that corresponds to 'CpuRTA'. The UI is based on the WATERS19 project; running it gives a more detailed, visual view of the 'CpuRTA' results (see the usage sketch after this list).            
+   (Refer to 'APP4RTA_1.0_Description.pdf' under 'responseTime-analyzer'>'plugins'>'doc' for more details.)            
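+
+A rough usage sketch of the programmatic entry points exercised by 'RTApp_WATERS19' in this change. Assumptions: the sketch class name is made up, 'CpuRTA' has a default constructor, 'setIA' takes the task-to-PU integer array used throughout this patch, and 'CommonUtils.getPUs' returns the processing-unit list; the all-zero mapping below is only a placeholder.
+```java
+package org.eclipse.app4mc.gsoc_rta;
+
+import java.io.File;
+import java.util.HashMap;
+import java.util.List;
+
+import org.eclipse.app4mc.amalthea.model.Amalthea;
+import org.eclipse.app4mc.amalthea.model.ProcessingUnit;
+import org.eclipse.app4mc.amalthea.model.PuType;
+import org.eclipse.app4mc.amalthea.model.Task;
+import org.eclipse.app4mc.amalthea.model.Time;
+import org.eclipse.app4mc.amalthea.model.io.AmaltheaLoader;
+import org.eclipse.app4mc.amalthea.model.util.RuntimeUtil.TimeType;
+
+public class CpuRtaUsageSketch {
+	public static void main(String[] args) {
+		final Amalthea model = AmaltheaLoader.loadFromFile(new File("model-input/WATERS19_release/ChallengeModel_release.amxmi"));
+		final CpuRTA cpurta = new CpuRTA();
+		cpurta.setModel(model);                                       // also builds the GPU-to-CPU label map via setGTCL
+		cpurta.setIA(new int[model.getSwModel().getTasks().size()]);  // placeholder: every task on PU index 0
+		cpurta.setPUl(CommonUtils.getPUs(model));
+		final HashMap<Integer, List<Task>> puTasks = cpurta.be_getPUTaskListHashMap(model);
+		final List<ProcessingUnit> pus = CommonUtils.getPUs(model);
+		for (int i = 0; i < pus.size(); i++) {
+			final ProcessingUnit pu = pus.get(i);
+			if (!pu.getDefinition().getPuType().equals(PuType.CPU)) {
+				continue;                                             // preciseTestCPURT is only meant for CPU cores
+			}
+			for (final Task t : puTasks.get(i)) {
+				final Time rt = cpurta.preciseTestCPURT(t, puTasks.get(i), TimeType.WCET, pu);
+				System.out.println(t.getName() + " -> " + rt.getValue());
+			}
+		}
+	}
+}
+```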
+            
+# 7. Remarks            
+1. The 'GPU Task on CPU' part had not been implemented, so when the T10 – T13 tasks were mapped to the CPU, the results were inaccurate.            
+	=> Done (addressed in the Phase 2 update below).            
+            
+# 8. Updates (Phase 2: June/24 ~ July/21)            
+### July/16            
+            
+<`CpuRTA`>            
+- In the previous phase, the CPU response time analysis was done without considering the situation where GPU tasks are mapped to the CPU by a newly generated mapping integer array. This was inaccurate because a GPU task contains offloading runnables, which are used to copy data in and out of local memory when the task is mapped to the GPU. Not only should these runnables be omitted, but the labels from the triggering task should also be taken into account so that the GPU task newly mapped to the CPU accesses the specified memory. Therefore, a function "setGTCL(final Amalthea model)", which collects the needed labels and saves them to a HashMap for each GPU task, has been added.            
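+- The map built by 'setGTCL' is keyed by the GPU task: index 0 of the stored array holds the labels read by the triggering task's pre-processing runnables (the copy-in data) and index 1 holds the labels written by its post-processing runnables (the copy-out data).            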
+            
+<`RuntimeUtilRTA`>            
+- Added the getExecutionTimeForGPUTaskOnCPU method, which considers only a GPU-original task's associated labels and ticks and ignores its offloading runnables.
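+- In essence, the new method computes Execution_Time = Read_Access_Time (labels copied in by the pre-processing runnables) + Ticks (compute runnable) + Write_Access_Time (labels copied out by the post-processing runnables), where the access times follow the Read(Write)_Access_Time formula listed above under getRunnableMemoryAccessTime.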
\ No newline at end of file
diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/META-INF/MANIFEST.MF b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/META-INF/MANIFEST.MF
index fcd412d..362ec44 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/META-INF/MANIFEST.MF
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/META-INF/MANIFEST.MF
@@ -7,6 +7,6 @@
 Bundle-Vendor: Eclipse APP4MC
 Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
- org.eclipse.app4mc.amalthea.model
+ org.eclipse.app4mc.amalthea.model,
  org.apache.log4j
 Automatic-Module-Name: app4mc.example.tool.java
diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CommonUtils.java b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CommonUtils.java
index 895b4b8..5097077 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CommonUtils.java
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CommonUtils.java
@@ -13,6 +13,7 @@
  *******************************************************************************/
 package org.eclipse.app4mc.gsoc_rta;
 
+import org.apache.log4j.Logger;
 import java.io.BufferedWriter;
 import java.io.FileOutputStream;
 import java.io.OutputStreamWriter;
@@ -30,7 +31,6 @@
 import java.util.Set;
 import java.util.stream.Collectors;
 
-import org.apache.log4j.Logger;
 import org.eclipse.app4mc.amalthea.model.Amalthea;
 import org.eclipse.app4mc.amalthea.model.AmaltheaFactory;
 import org.eclipse.app4mc.amalthea.model.CallSequence;
@@ -664,8 +664,6 @@
 		return pus;
 	}
 
-	@SuppressWarnings("resource")
-	/** writes a stringbuffer into either measurements.csv or optional @param filename */
 	public static void writeCSV(final StringBuffer sbp, final String... filenamep) {
 		if (null == sbp) {
 			Logger.getLogger(CommonUtils.class).error("Nothing to write. Probably, the system is not schedulable.");
@@ -811,4 +809,4 @@
 				+ SharedConsts.onlyWrittenLabelsCE + "," + SharedConsts.ignoreInfeasibility + "\n");
 		return sbl;
 	}
-}
\ No newline at end of file
+}
diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CpuRTA.java b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CpuRTA.java
index bf7d665..4febdc0 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CpuRTA.java
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/CpuRTA.java
@@ -20,18 +20,31 @@
 import java.util.Comparator;
 import java.util.HashMap;
 import java.util.List;
+import java.util.stream.Collectors;
 
 import org.apache.log4j.Logger;
 import org.eclipse.app4mc.amalthea.model.Amalthea;
+import org.eclipse.app4mc.amalthea.model.CallSequenceItem;
+import org.eclipse.app4mc.amalthea.model.ClearEvent;
+import org.eclipse.app4mc.amalthea.model.InterProcessStimulus;
+import org.eclipse.app4mc.amalthea.model.InterProcessTrigger;
+import org.eclipse.app4mc.amalthea.model.Label;
+import org.eclipse.app4mc.amalthea.model.LabelAccess;
+import org.eclipse.app4mc.amalthea.model.LabelAccessEnum;
 import org.eclipse.app4mc.amalthea.model.Preemption;
 import org.eclipse.app4mc.amalthea.model.ProcessingUnit;
 import org.eclipse.app4mc.amalthea.model.PuType;
+import org.eclipse.app4mc.amalthea.model.Runnable;
+import org.eclipse.app4mc.amalthea.model.SetEvent;
 import org.eclipse.app4mc.amalthea.model.Task;
+import org.eclipse.app4mc.amalthea.model.TaskRunnableCall;
 import org.eclipse.app4mc.amalthea.model.Time;
 import org.eclipse.app4mc.amalthea.model.TimeUnit;
+import org.eclipse.app4mc.amalthea.model.WaitEvent;
 import org.eclipse.app4mc.amalthea.model.io.AmaltheaLoader;
 import org.eclipse.app4mc.amalthea.model.util.FactoryUtil;
 import org.eclipse.app4mc.amalthea.model.util.RuntimeUtil.TimeType;
+import org.eclipse.app4mc.amalthea.model.util.SoftwareUtil;
 import org.eclipse.emf.common.util.EList; 
 
 /**
@@ -45,10 +58,7 @@
  *				should be the first step before executing any response time method.
  */
 public class CpuRTA {
-	public final File inputFile = new File("model-input/ChallengeModel_release.amxmi");
-	
-	/* synchronous or asynchronous offloading */
-	public boolean synchronousOffloading = false;
+	public final File inputFile = new File("model-input/WATERS19_release/ChallengeModel_release.amxmi");
 	
 	/**
 	 * Get Default IA Map
@@ -77,10 +87,8 @@
 	 * @param pAmalthea			the parameter Amalthea model which would reinitialize the Amalthea model
 	 */
 	public void setModel(final Amalthea pAmalthea) {
-		this.model = pAmalthea;
-		
-		// TODO: gpuToCpuLabels, getGTCL, setGTCL, Cumulative Latency Time 
-		//setGTCL(this.model);
+		this.model = pAmalthea; 
+		this.setGTCL(this.model);
 	}
 	
 	private HashMap<Task, Time> trt = null;
@@ -182,6 +190,7 @@
 	}
 	
 	//TODO: Contention not considered
+	
 	private List<Task> gpuTaskList = new ArrayList<Task>();
 
 	/**
@@ -195,7 +204,6 @@
 	}
 
 	private final List<Task> triggeringTaskList = new ArrayList<Task>();
-
 	/**
 	 * Since this method is used by RTARuntimeUtil, the visibility should be 'protected'
 	 *
@@ -206,8 +214,106 @@
 		return this.triggeringTaskList;
 	}
 	
-	// TODO: Consider Offloading Runnable
+	/* this runnable is used to calculate execution time in the RuntimeUtilRTA class */
+	protected Runnable offloadingAsyncRunnable = null;
 	
+	private final HashMap<Task, List<Label>[]> gpuToCpuLabels = new HashMap<Task, List<Label>[]>();
+	
+	/**
+	 * Since this method is used by RuntimeUtilRTA, the visibility should be 'protected'
+	 *
+	 * @return
+	 * 			gpuToCpuLabels HashMap which contains required labels (of the corresponding task)
+	 * 			that need to be taken into account when GPU tasks are mapped to CPU
+	 */
+	protected HashMap<Task, List<Label>[]> getGTCL() {
+		return this.gpuToCpuLabels;
+	}
+	
+	/**
+	 * Besides the gpuToCpuLabels HashMap, this also sets gpuTaskList (which contains only GPU tasks),
+	 * triggeringTaskList (which contains only tasks with an InterProcessTrigger)
+	 * and offloadingAsyncRunnable (the runnable that is taken into account for triggering tasks when the offloading mode is asynchronous).
+	 *
+	 * @param model				the Amalthea model from which the required labels of the corresponding GPU tasks are derived
+	 */
+	@SuppressWarnings("unchecked")
+	private void setGTCL(final Amalthea model) {
+		if (model != null) {
+			final EList<Task> allTaskList = model.getSwModel().getTasks();
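+			/* a task whose first stimulus is an InterProcessStimulus is treated as a GPU task (it is triggered by a CPU-side InterProcessTrigger) */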
+			this.gpuTaskList = allTaskList.stream().filter(s -> s.getStimuli().get(0) instanceof InterProcessStimulus).collect(Collectors.toList());
+			/* find the triggering tasks */
+			for (final Task t : allTaskList) {
+				final List<CallSequenceItem> triggerList = SoftwareUtil.collectCalls(t, null, 
+						(call -> call instanceof InterProcessTrigger));
+				if (triggerList.size() != 0) {
+					this.triggeringTaskList.add(t);
+					if (RuntimeUtilRTA.doesTaskHaveAsyncRunnable(t, this)) {
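+						/* the asynchronous offloading-cost runnable is the call that comes right after the WaitEvent in the call sequence */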
+						final List<CallSequenceItem> cList = SoftwareUtil.collectCalls(t, null, (call -> call instanceof TaskRunnableCall || 
+								call instanceof InterProcessTrigger || call instanceof ClearEvent || call instanceof SetEvent || call instanceof WaitEvent));
+						final int waitIndex = cList.indexOf(cList.stream().filter(s -> s instanceof WaitEvent).iterator().next());
+						final int asyncOffloadingIndex = waitIndex + 1;
+						if (cList.get(asyncOffloadingIndex) instanceof TaskRunnableCall) {
+							this.offloadingAsyncRunnable = ((TaskRunnableCall) cList.get(asyncOffloadingIndex)).getRunnable();
+						}
+					}
+				}
+			}
+			for (final Task t : this.gpuTaskList) {
+				final InterProcessStimulus ips = (InterProcessStimulus) (t.getStimuli().get(0));
+				Task triggeringTask = null;
+				for (final Task tt : this.triggeringTaskList) {
+					final InterProcessTrigger ipt = (InterProcessTrigger) SoftwareUtil.collectCalls(tt, null, 
+							(call -> call instanceof InterProcessTrigger)).stream().iterator().next();
+					if (ips.equals(ipt.getStimulus())) {
+						triggeringTask = tt;
+						break;
+					}
+				}
+				final List<Label> readLabelList = new ArrayList<Label>();
+				final List<Label> writeLabelList = new ArrayList<Label>();
+				final List<CallSequenceItem> callList = SoftwareUtil.collectCalls(triggeringTask, null, (call -> call instanceof TaskRunnableCall || 
+						call instanceof InterProcessTrigger || call instanceof ClearEvent || call instanceof SetEvent || call instanceof WaitEvent));
+				final InterProcessTrigger ipt = (InterProcessTrigger) callList.stream().filter(s -> s instanceof InterProcessTrigger).iterator().next();
+				final int indexforTrigger = callList.indexOf(ipt);
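+				/* calls before the InterProcessTrigger copy data in (collect READ labels); calls after it copy data out (collect WRITE labels) */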
+				for (int i = 0; i < callList.size(); i++) {
+					Runnable thisRunnable = null;
+					/* Pre-processing Runnable */
+					if ((i < indexforTrigger) && (callList.get(i) instanceof TaskRunnableCall)) {
+						thisRunnable = (Runnable) ((TaskRunnableCall) callList.get(i)).getRunnable();
+						final List<LabelAccess> thisLAList = SoftwareUtil.getLabelAccessList(thisRunnable, null);
+						for (final LabelAccess la : thisLAList) {
+							if (la.getAccess().equals(LabelAccessEnum.READ)) {
+								readLabelList.add(la.getData());
+							}
+						}
+					}
+					/* Post-processing Runnable */
+					else if ((i > indexforTrigger) && (callList.get(i) instanceof TaskRunnableCall)) {
+						thisRunnable = ((TaskRunnableCall) callList.get(i)).getRunnable();
+						final List<LabelAccess> thisLAList = SoftwareUtil.getLabelAccessList(thisRunnable, null);
+						for (final LabelAccess la : thisLAList) {
+							if (la.getAccess().equals(LabelAccessEnum.WRITE)) {
+								writeLabelList.add(la.getData());
+							}
+						}
+					}
+				}
+				final List<Label>[] aryofLabelList = new ArrayList[2];
+				aryofLabelList[0] = readLabelList;
+				aryofLabelList[1] = writeLabelList;
+				this.gpuToCpuLabels.put(t, aryofLabelList);
+			}
+		}
+		else {
+			this.gpuTaskList.clear();
+			this.triggeringTaskList.clear();
+			this.offloadingAsyncRunnable = null;
+			this.gpuToCpuLabels.clear();
+		}
+	}
+	
+	// TODO: Consider Offloading Runnable
 	// TODO: Cumulative Latency Time
 	public static void main(String[] args) {
 		org.apache.log4j.BasicConfigurator.configure();
@@ -257,7 +363,6 @@
 			log.error("No PUList Loaded!");
 			return null;
 		}
-		
 		// TODO: Contention
 		Time time = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);
 		for (int i = 0; i < this.tpuMapping.length; i++) {
@@ -282,7 +387,7 @@
 	 * @return
 	 * 			response time of the observed task
 	 */
-	private Time getTaskCPURT(final Task task, final TimeType executionCase) {
+	protected Time getTaskCPURT(final Task task, final TimeType executionCase) {
 		/* 1. validate thisTask is mapped to CPU */
 		final int tindex = this.model.getSwModel().getTasks().indexOf(task);
 		final int puindex = this.tpuMapping[tindex];
@@ -315,13 +420,10 @@
 		for (final Task t : taskList) {
 			stimuliList.add(CommonUtils.getStimInTime(t));
 		}
-
 		/* Sorting (Shortest Period(Time) first) */
 		Collections.sort(stimuliList, new TimeCompIA());
-		/*
-		 * Sort tasks to the newTaskList in order of Period length (shortest first
-		 * longest last)-(according to the stimuliList)
-		 */
+		/* Sort tasks to the newTaskList in order of Period length (shortest first
+		 * longest last)-(according to the stimuliList) */
 		final List<Task> newTaskList = new ArrayList<>();
 		for (int i = 0; i < stimuliList.size(); i++) {
 			for (final Task t : taskList) {
@@ -352,7 +454,6 @@
 			log.debug("!!! This taskList is empty so I am returning MAX !!!");
 			return FactoryUtil.createTime(BigInteger.valueOf(Long.MAX_VALUE), TimeUnit.PS);
 		}
-
 		/* to check if the given task is in the taskList */
 		int flag = 0;
 		int index = 0;
@@ -371,7 +472,7 @@
 		for (int i = 0; i < index + 1; i++) {
 			period = CommonUtils.getStimInTime(taskList.get(i));
 			if (index == 0) {
-				thisRT = rtaut.getExecutionTimeforCPUTask(taskList.get(i), pu, executionCase, this.trt, this);
+				thisRT = rtaut.getExecutionTimeforCPUTask(taskList.get(i), pu, executionCase, this);
 				if (thisRT.compareTo(period) <= 0) {
 					// TODO: cumulative if (thisRT.compareTo(FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS)) > 0)
 					return thisRT;
@@ -385,7 +486,7 @@
 				if (taskList.get(i).getPreemption().equals(Preemption.COOPERATIVE)) {
 					// TODO: Blocking
 				}
-				final Time thisExeTime = rtaut.getExecutionTimeforCPUTask(taskList.get(i), pu, executionCase, this.trt, this);
+				final Time thisExeTime = rtaut.getExecutionTimeforCPUTask(taskList.get(i), pu, executionCase, this);
 				if (thisExeTime.compareTo(FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS)) == 0) {
 					return thisExeTime;
 				}
@@ -394,11 +495,10 @@
 							+ " for task " + task.getName());
 					return FactoryUtil.createTime(BigInteger.valueOf(Long.MAX_VALUE), TimeUnit.PS);
 				}
-
 				Time culmulativeRT = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);
 				/* 1. add all the execution time till the index */
 				for (int j = 0; j < i + 1; j++) {
-					final Time thisTime = rtaut.getExecutionTimeforCPUTask(taskList.get(j), pu, executionCase, this.trt, this);
+					final Time thisTime = rtaut.getExecutionTimeforCPUTask(taskList.get(j), pu, executionCase, this);
 					culmulativeRT = culmulativeRT.add(thisTime);
 				}
 				if (culmulativeRT.compareTo(period) <= 0) {
@@ -407,7 +507,7 @@
 						for (int k = 0; k < i; k++) {
 							Time localPeriod = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);
 							localPeriod = CommonUtils.getStimInTime(taskList.get(k));
-							final Time preExeTime = rtaut.getExecutionTimeforCPUTask(taskList.get(k), pu, executionCase, this.trt, this);
+							final Time preExeTime = rtaut.getExecutionTimeforCPUTask(taskList.get(k), pu, executionCase, this);
 							final double ri_period = Math.ceil(culmulativeRT.divide(localPeriod));
 							excepThisExeTime = excepThisExeTime.add(preExeTime.multiply(ri_period));
 						}
@@ -451,9 +551,9 @@
 	 * @return
 	 * 			HashMap<Integer, List<Task>> puListHashMap
 	 */
-	public HashMap<Integer, List<Task>> be_getPUTaskListHashMap() {
+	public HashMap<Integer, List<Task>> be_getPUTaskListHashMap(final Amalthea model) {
 		HashMap<Integer, List<Task>> puListHashMap = new HashMap<>();
-		final EList<Task> allTaskList = this.model.getSwModel().getTasks();
+		final EList<Task> allTaskList = model.getSwModel().getTasks();
 		for(int i = 0; i < this.pul.size(); i++) {
 			final List<Task> puTaskList = new ArrayList<Task>();
 			for(int j = 0; j < this.tpuMapping.length; j++) {
diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/RuntimeUtilRTA.java b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/RuntimeUtilRTA.java
index 2a00f2c..4713894 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/RuntimeUtilRTA.java
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/RuntimeUtilRTA.java
@@ -24,6 +24,7 @@
 import org.eclipse.app4mc.amalthea.model.ClearEvent;

 import org.eclipse.app4mc.amalthea.model.InterProcessStimulus;

 import org.eclipse.app4mc.amalthea.model.InterProcessTrigger;

+import org.eclipse.app4mc.amalthea.model.Label;

 import org.eclipse.app4mc.amalthea.model.LabelAccess;

 import org.eclipse.app4mc.amalthea.model.LabelAccessEnum;

 import org.eclipse.app4mc.amalthea.model.ProcessingUnit;

@@ -60,7 +61,7 @@
 	 * 4. task with only Ticks

 	 * 

 	 * @param task				the observed task

-	 * @param pu				ProcessingUnit that would compute the given runnable (A57 or Denver)

+	 * @param pu				ProcessingUnit that would compute the given task (A57 or Denver)

 	 * @param executionCase		BCET, ACET, WCET

-	 * @param trt				HashMap that would contain the corresponding GPU task's response time

 	 * @param cpurta			the instance of CPURtaIA class that calls this method

@@ -68,7 +69,8 @@
 	 * @return

 	 * 			execution time of the observed task

 	 */

-	protected Time getExecutionTimeforCPUTask(final Task task, final ProcessingUnit pu, final TimeType executionCase, final HashMap<Task, Time> trt, final CpuRTA cpurta) {

+	protected Time getExecutionTimeforCPUTask(final Task task, final ProcessingUnit pu, final TimeType executionCase, final CpuRTA cpurta) {

+		Logger.getLogger(RuntimeUtilRTA.class);

 		// TODO: Contention Parameter

 		/* set the default result time variable as 0s */

 		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

@@ -95,21 +97,30 @@
 		if (isTriggeringTask(task)) {

 			/* all should be synchronous (wait should not be ignored) - active wait */

 			if (SharedConsts.synchronousOffloading == true) {

-				result = syncTypeOperation(indexforTrigger, callSequenceList, runnableList, trt, pu, executionCase, cpurta);

+				result = syncTypeOperation(indexforTrigger, callSequenceList, runnableList, pu, executionCase, cpurta);

 				/* if this task has the OffloadingAsync runnable, subtract the runnable part from the result */

-				// TODO: if (doesTaskHaveAsyncRunnable(task, cpurta))

+				if (doesTaskHaveAsyncRunnable(task, cpurta)) {

+					result = result.subtract(getExecutionTimeForRTARunnable(cpurta.offloadingAsyncRunnable, pu, executionCase));

+				}

 			}

 			/* all should be asynchronous (wait should be ignored) - passive wait */

 			else {

 				result = asyncTypeOperation(runnableList, pu, executionCase);

 				/* if this task is missing the OffloadingAsync runnable, add the runnable part to the result */

-				// TODO: if (!doesTaskHaveAsyncRunnable(task, cpurta))

+				if (!doesTaskHaveAsyncRunnable(task, cpurta)) {

+					result = result.add(getExecutionTimeForRTARunnable(cpurta.offloadingAsyncRunnable, pu, executionCase));

+				}

 			}

 		}

 		else {

 			/* GPU Origin Task on CPU & No Triggering Behavior (No InterProcessTrigger) */

 			if (!(callSequenceList.get(indexforTrigger) instanceof InterProcessTrigger)) {

-				// TODO: GPU Origin task that is newly mapped to CPU

+				/* GPU Origin task that is newly mapped to CPU */

+				if (cpurta.getGpuTaskList().contains(task)) {

+					result = result.add(getExecutionTimeForGPUTaskOnCPU(task, runnableList, pu, executionCase, cpurta));

+					// TODO: result = result.add(contention);

+					return result;

+				}

 				/* No Triggering Behavior (No InterProcessTrigger) */

 				for (final Runnable r : runnableList) {

 					result = result.add(getExecutionTimeForRTARunnable(r, pu, executionCase));

@@ -179,7 +190,7 @@
 	 * 			synchronous execution time of the observed set

 	 */

 	private Time syncTypeOperation(final int indexforTrigger, final List<CallSequenceItem> callSequenceList, final List<Runnable> runnableList,

-			final HashMap<Task, Time> trt, final ProcessingUnit pu, final TimeType executionCase, final CpuRTA cpurta) {

+			final ProcessingUnit pu, final TimeType executionCase, final CpuRTA cpurta) {

 		Logger.getLogger(RuntimeUtilRTA.class).debug("TYPE: SYNC");

 		/* set the default result time variable as 0s */

 		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

@@ -190,7 +201,7 @@
 		final InterProcessTrigger ipt = (InterProcessTrigger) callSequenceList.get(indexforTrigger);

 		final Task triggeredGPUTask = cpurta.getModel().getSwModel().getTasks().stream().filter(t -> t.getStimuli().get(0).equals(ipt.getStimulus())).iterator()

 				.next();

-		result = result.add(trt.get(triggeredGPUTask));

+		result = result.add(cpurta.getTRT().get(triggeredGPUTask));

 		return result;

 	}

 

@@ -220,9 +231,114 @@
 		return result;

 	}

 

-	// TODO: getExecutionTimeForGPUTaskOnCPU

+	/**

+	 * Identify whether or not the given task has the OffloadingAsyncCosts Runnable (that takes costs into account in the Asynchronous mode)

+	 * which some triggering tasks do not have.

+	 * Since this method is used by CpuRTA, the visibility should be 'protected'

+	 * 

+	 * @param task			the observed task

+	 * @param cpurta		the instance of the CpuRTA class that calls this method

+	 * 						(to access the triggeringTaskList List<Task> variable that contains tasks with an InterProcessTrigger)

+	 * @return

+	 * 			boolean value of the result

+	 */

+	protected static boolean doesTaskHaveAsyncRunnable (final Task task, final CpuRTA cpurta) {

+		boolean result = false;

+		if (cpurta.getTriggeringTaskList().contains(task)) {

+			final List<CallSequenceItem> callList = SoftwareUtil.collectCalls(task, null, 

+					(call -> call instanceof TaskRunnableCall || call instanceof InterProcessTrigger || call instanceof ClearEvent

+							|| call instanceof SetEvent || call instanceof WaitEvent));

+			final int waitIndex = callList.indexOf(callList.stream().filter(s -> s instanceof WaitEvent).iterator().next());

+			final int clearIndex = callList.indexOf(callList.stream().filter(s -> s instanceof ClearEvent).iterator().next());

+			if ((clearIndex - waitIndex) > 1) {

+				result = true;

+			}

+		}

+		else {

+			Logger.getLogger(RuntimeUtilRTA.class).debug("ERROR: This task is not a triggering task!!");

+		}

+		return result;

+	}

 	

-	// TODO: doesTaskHaveAsyncRunnable

+	/**

+	 * Calculate the execution time of the given task, which was originally designed for the GPU but is newly mapped to the CPU by the Genetic Algorithm mapping.

+	 * It should ignore the offloading runnables and take the required labels (read labels from pre-processing, write labels from post-processing) into account.

+	 * The method follows Read / Compute (Ticks) / Write semantics.

+	 * Read(Write)_Access_Time = Round_UP(Size_of_Read_Labels / 64.0 Bytes) * (Read_Latency / Frequency)

+	 * 

+	 * @param task				the observed task

+	 * @param runnableList		runnable list of the given task

+	 * @param pu				ProcessingUnit that would compute the given runnable (A57 or Denver)

+	 * @param executionCase		BCET, ACET, WCET

+	 * @param cpurta			the instance of the CpuRTA class that calls this method

+	 * 							(to access the gpuToCpuLabels HashMap variable that contains the List<Label> arrays of required read & write labels)

+	 * @return

+	 * 			execution time of the observed task

+	 */

+	private Time getExecutionTimeForGPUTaskOnCPU(final Task task, final List<Runnable> runnableList, final ProcessingUnit pu, 

+			final TimeType executionCase, final CpuRTA cpurta) {

+		Logger.getLogger(RuntimeUtilRTA.class).debug("TYPE: GPUTaskOnCPU // " + "Task: " + task.getName());

+		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

+		Runnable funcRunnable = null;

+		for (final Runnable r : runnableList) {

+			final List<Ticks> thisTicksList = SoftwareUtil.getTicks(r, null);

+			if (thisTicksList.size() != 0) {

+				funcRunnable = r;

+				break;

+			}

+		}

+		final Time parameter = FactoryUtil.createTime(BigInteger.ONE, TimeUnit.S);

+		final double freq = AmaltheaServices.convertToHertz(pu.getFrequencyDomain().getDefaultValue()).longValue();

+		final HashMap<Task, List<Label>[]> gtcl = cpurta.getGTCL();

+		final List<Label>[] thisLabelList = gtcl.get(task);

+		final List<Label> readLabelList = thisLabelList[0];

+		final List<Label> writeLabelList = thisLabelList[1];

+		for (final Label l : readLabelList) {

+			Logger.getLogger(RuntimeUtilRTA.class).debug("Label(Read): " + l.getName() + " // (" + task.getName() + ")");

+		}

+		for (final Label l : writeLabelList) {

+			Logger.getLogger(RuntimeUtilRTA.class).debug("Label(Write): " + l.getName() + " // (" + task.getName() + ")");

+		}

+		double readLatency = 0;

+		double writeLatency = 0;

+		if (executionCase.equals(TimeType.WCET)) {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getUpperBound();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getUpperBound();

+		}

+		else if (executionCase.equals(TimeType.BCET)) {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getLowerBound();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getLowerBound();

+		}

+		else {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getAverage();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getAverage();

+		}

+		/* Read (LabelAccess): */

+		double readAccessParameter = 0;

+		double sizeofReadLabels = 0;

+		for (final Label rl : readLabelList) {

+			sizeofReadLabels += rl.getSize().getNumberBytes();

+		}

+		readAccessParameter = (Math.ceil(sizeofReadLabels / 64.0) * (readLatency / freq));

+		final Time readAccess = parameter.multiply(readAccessParameter);

+		result = result.add(readAccess); // LabelAccess(Read) added

+		/* Execution (Ticks): */

+		final List<Ticks> ticksList = SoftwareUtil.getTicks(funcRunnable, null);

+		for (final Ticks t : ticksList) {

+			final Time tickExecution = RuntimeUtil.getExecutionTimeForTicks(t, pu, executionCase);

+			result = result.add(tickExecution); // Execution(Ticks) added

+		}

+		/* Write (LabelAccess): */

+		double writeAccessParameter = 0;

+		double sizeofWriteLabels = 0;

+		for (final Label wl : writeLabelList) {

+			sizeofWriteLabels += wl.getSize().getNumberBytes();

+		}

+		writeAccessParameter = (Math.ceil(sizeofWriteLabels / 64.0) * (writeLatency / freq));

+		final Time writeAccess = parameter.multiply(writeAccessParameter);

+		result = result.add(writeAccess); // LabelAccess(Write) added

+		return result;

+	}

 	

 	/**

 	 * Calculate execution time of the given runnable.

@@ -234,7 +350,8 @@
 	 * @return

 	 * 			execution time of the observed runnable

 	 */

-	private Time getExecutionTimeForRTARunnable(final Runnable runnable, final ProcessingUnit pu, final TimeType executionCase) {

+	protected Time getExecutionTimeForRTARunnable(final Runnable runnable, final ProcessingUnit pu, final TimeType executionCase) {

+		Logger.getLogger(RuntimeUtilRTA.class).debug(executionCase.toString());

 		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

 		final double freq = AmaltheaServices.convertToHertz(pu.getFrequencyDomain().getDefaultValue()).longValue();

 		double readLatency = 0;

@@ -252,7 +369,7 @@
 			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getAverage();

 		}

 		/* Read & Write Memory Access Time */

-		result = result.add(getRunnableMemoryAccessTime(runnable, freq, readLatency, writeLatency, executionCase));

+		result = result.add(getRunnableMemoryAccessTime(runnable, freq, readLatency, writeLatency));

 		

 		/* Execution (Ticks): */

 		final List<Ticks> ticksList = SoftwareUtil.getTicks(runnable, null);

@@ -262,8 +379,40 @@
 		}

 		return result;

 	}

-	

-	// TODO: getTaskMemoryAccessTime

+

+	/**

+	 * Calculate memory access time of the observed task.

+	 * Since this method is used by CpuRTA, the visibility should be 'protected'

+	 * 

+	 * @param task				the observed task

+	 * @param pu				ProcessingUnit that would compute the given runnable (A57 or Denver)

+	 * @param executionCase		BCET, ACET, WCET

+	 * @return

+	 * 			memory access time of the observed task

+	 */

+	protected Time getTaskMemoryAccessTime (final Task task, final ProcessingUnit pu, final TimeType executionCase) {

+		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

+		final double freq = AmaltheaServices.convertToHertz(pu.getFrequencyDomain().getDefaultValue()).longValue();

+		final List<Runnable> runnableList = SoftwareUtil.getRunnableList(task, null);

+		double readLatency = 0;

+		double writeLatency = 0;

+		if (executionCase.equals(TimeType.WCET)) {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getUpperBound();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getUpperBound();

+		}

+		else if (executionCase.equals(TimeType.BCET)) {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getLowerBound();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getLowerBound();

+		}

+		else {

+			readLatency = pu.getAccessElements().get(0).getReadLatency().getAverage();

+			writeLatency = pu.getAccessElements().get(0).getWriteLatency().getAverage();

+		}

+		for (final Runnable r : runnableList) {

+			result = result.add(getRunnableMemoryAccessTime(r, freq, readLatency, writeLatency));

+		}

+		return result;

+	}

 	

 	/**

 	 * Calculate memory access time of the observed runnable.

@@ -274,12 +423,11 @@
 	 * @param frequency			frequency value of the Processing Unit

 	 * @param readLatency		readLatency value of the Processing Unit

 	 * @param writeLatency		writeLatency value of the Processing Unit

-	 * @param executionCase		BCET, ACET, WCET

 	 * @return

 	 * 			memory access time of the observed runnable

 	 */

 	private Time getRunnableMemoryAccessTime (final Runnable runnable, final double frequency, 

-			final double readLatency, final double writeLatency, final TimeType executionCase) {

+			final double readLatency, final double writeLatency) {

 		Time result = FactoryUtil.createTime(BigInteger.ZERO, TimeUnit.PS);

 		final Time parameter = FactoryUtil.createTime(BigInteger.ONE, TimeUnit.S);

 		final List<LabelAccess> thisLAList = SoftwareUtil.getLabelAccessList(runnable, null);

@@ -314,7 +462,7 @@
 	 * @return

 	 * 			boolean value of the result

 	 */

-	private static boolean isTriggeringTask(final Task task) {

+	protected static boolean isTriggeringTask(final Task task) {

 		/* true: Triggering Task, false: Non-Triggering Task */

 		boolean result = false;

 		final List<CallSequenceItem> callList = SoftwareUtil.collectCalls(task, null, 

diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/SharedConsts.java b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/SharedConsts.java
index 9b017e6..2ab3f29 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/SharedConsts.java
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/SharedConsts.java
@@ -21,12 +21,13 @@
 	public static boolean onlyWrittenLabelsCE = true;
 	public static boolean synchronousOffloading = false;
 	public static boolean useModelTimeSlices = false;
-	public static boolean ignoreInfeasibility = false;
+	public static boolean ignoreInfeasibility = true;
 	public static int[] timeSlices;
 	public static long timeSliceLengthPS = 1000000000l;
 	public static final boolean levelIBusyPeriod = false;
 	public static TS_DERIV tsDeriv = TS_DERIV.TSxPrio;
 	public static OPT_TYPE optimize = OPT_TYPE.RESPONSETIMESUM;
+	public static int comParadigm = 0;
 	/*-----------End Measur. Configuration--------------–*/
 
 	/* Arbitrary Integer Array (GA scenario) */
diff --git a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/ui/RTApp_WATERS19.java b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/ui/RTApp_WATERS19.java
index 5c94b99..4df858e 100644
--- a/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/ui/RTApp_WATERS19.java
+++ b/eclipse-tools/responseTime-analyzer/plugins/org.eclipse.app4mc.gsoc_rta/src/org/eclipse/app4mc/gsoc_rta/ui/RTApp_WATERS19.java
@@ -50,7 +50,6 @@
 @SuppressWarnings("serial")

 public class RTApp_WATERS19 extends JFrame {

 	private HashMap<Integer, java.util.List<Task>> puTaskListHM;

-	private TimeType executionCase;

 	private JFrame frame;

 	private boolean iaEntered = false;

 	

@@ -164,7 +163,7 @@
 				}

 				cpurta.setIA(tpumap);

 				cpurta.setPUl(CommonUtils.getPUs(cpurta.getModel()));

-				puTaskListHM = cpurta.be_getPUTaskListHashMap();

+				puTaskListHM = cpurta.be_getPUTaskListHashMap(cpurta.getModel());

 				for (int i = 0; i < puTaskListHM.size(); i++) {

 					if (tListList.get(i).getItemCount() == 0) {

 						for (int j = 0; j < puTaskListHM.get(i).size(); j++) {	

@@ -196,23 +195,23 @@
 					}

 				}				

 				if (rdbtnSynchronous.isSelected()) {

-					cpurta.synchronousOffloading = true;

+					SharedConsts.synchronousOffloading = true;

 				}

 				else if (rdbtnAsynchronous.isSelected()) {

-					cpurta.synchronousOffloading = false;

+					SharedConsts.synchronousOffloading = false;

 				}

 				else {

 					JOptionPane.showMessageDialog(frame, "ERROR: You should choose an offloading mode! (Sync / Async)");

 					return ;

 				}

 				if (rdbtnWorstCase.isSelected()) {

-					executionCase = TimeType.WCET;

+					SharedConsts.timeType = TimeType.WCET;

 				}

 				else if (rdbtnAverageCase.isSelected()) {

-					executionCase = TimeType.ACET;

+					SharedConsts.timeType = TimeType.ACET;

 				}

 				else if (rdbtnBestCase.isSelected()) {

-					executionCase = TimeType.BCET;

+					SharedConsts.timeType = TimeType.BCET;

 				}

 				else {

 					JOptionPane.showMessageDialog(frame, "ERROR: You should choose an execution case! (Worst Case / Average Case / Best Case)");

@@ -231,7 +230,7 @@
 					if (rtListList.get(i).getItemCount() == 0) {

 						if (pu.getDefinition().getPuType().equals(PuType.CPU)) {

 							for (Task t : thisPUTaskList) {

-								thisRT = cpurta.preciseTestCPURT(t, thisPUTaskList, executionCase, pu);

+								thisRT = cpurta.preciseTestCPURT(t, thisPUTaskList, SharedConsts.timeType, pu);

 								if (thisRT.getValue().equals(BigInteger.valueOf(Long.MAX_VALUE))) {

 									rtListList.get(i).add("Non Scheduleable! => MAX Value");

 									flag = true;