author    Christian W. Damus   2016-07-13 19:05:54 +0000
committer Christian W. Damus   2016-07-13 19:51:04 +0000
commit    ddfb7b0caefdd1be212db31bde24b8a9feb225de (patch)
tree      5c350c81ea1a9fb2985bb62195c06ff50a7a5174 /plugins/infra/emf
parent    f68c766e5c5df1bb5c08fd65bc6f5464d3a58208 (diff)
Bug 496299: Controlled Units as Integral Fragments
https://bugs.eclipse.org/bugs/show_bug.cgi?id=496299

Implement a new mode of controlled unit in Papyrus dubbed "shards". A shard is like any other sub-unit created up to and including the Neon release, except that it cannot be opened independently in the editor. The Papyrus editor, when asked to open a "shard", will instead open the root resource of the model. Likewise, the editor matcher normalizes editor inputs to the root resource of any shard.

The graph of shard dependencies is inferred from a new workspace-wide index of cross-resource containment references, when it is available. Otherwise, the linkage of shards to their parents is parsed on the fly from the shard annotation's reference, with a relatively efficient XML parse that terminates after reading only a few lines of the XMI text.

A new ResourceLocator is implemented to provide a pluggable hook for resource loading (including proxy resolution). When a shard resource is loaded, its parent resource chain is first loaded from the top down, so that all context of profile applications is available before the shard itself is loaded, since the shard may have stereotype applications that depend on those profile applications. The CoreMultiDiagramEditor installs this resource locator on the ModelSet; other applications (including in a non-Eclipse context) can make similar use of it.

Some additional fixes are required in other core components to make the loading of referenced sharded models (as in bug 458837) work:

* The SemanticUMLContentProvider did not detect the final resolution of containment proxies, which changes what looks like a model root object into just another intermediate element in the content tree. Besides that, it would schedule a large number of redundant UI refreshes asynchronously (deferred) on the UI thread.
* The DiModel and NotationModel would load their adjuncts to the *.uml resource when that resource was created, not after it had been loaded. This is much too early and causes the transactional editing domain to detect the attachment of a resource's contents at the end of loading as an attempt to edit the model during a read-only transaction, which logs an exception and bombs the UI action. Instead, these models now have snippets that load the *.di and *.notation resources after the semantic resource has been loaded.
* The new model snippets required an additional fix in the loading of IModels to handle contributions of snippets and dependencies to models that are overridden by other IModels registered under the same ID, as is the case with the NotationModel and the CSSNotationModel, the latter of which needs the snippet declared by the former.
* The IModels additionally need to ensure that they start snippets on loading of an existing model even when it is already found to be loaded in the ModelSet (as happens often in JUnit tests).
* The AbstractModelFixture in the JUnit test framework is updated to ensure that the ModelSet is properly initialized, with its own snippets started and its IModels loaded and their snippets started.
* The basic uncontrol command now removes the shard annotation from the uncontrolled element/resource, if there was one.

Because this bundle now supports a new feature (shards), it seems appropriate to bump its minor version number.

General-purpose changes in the core workspace model index framework improve overall performance, which is of particular significance in large and highly fragmented models:

* Implement persistent storage of the workspace model index at workspace save, to support quick start-up without parsing the entire workspace.
* Consolidation of indices:
  * run a single pool of indexing jobs and a single resource change listener to trigger (re-)indexing of files
  * all indices matching any given file process it
  * includes a new extension point from which all indices are loaded into the shared index manager to initialize them and do the work

(cherry-picked from streams/2.0-maintenance)

Change-Id: Ifd65a71c57134b69d873f17139f3cedbf11c5ba5
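The on-the-fly shard detection described above relies on an XML parse that stops as soon as the answer is known (the diffstat below includes a `StopParsing.java` for this purpose). The following is a minimal standalone sketch of that early-termination technique using only the JDK SAX parser; the `ShardProbe` class and its element-count cutoff are illustrative, not the actual Papyrus `CrossReferenceIndexHandler` implementation:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class ShardProbe {
	static final String SHARD_SOURCE = "http://www.eclipse.org/papyrus/2016/resource/shard";

	/** Thrown from the handler to abort the parse as soon as the answer is known. */
	static class StopParsing extends SAXException {
		final boolean isShard;

		StopParsing(boolean isShard) {
			this.isShard = isShard;
		}
	}

	/** Returns true if the XMI text declares the shard annotation near the top of the document. */
	public static boolean isShard(String xmiText) throws Exception {
		DefaultHandler handler = new DefaultHandler() {
			private int elements;

			@Override
			public void startElement(String uri, String local, String qName, Attributes attrs) throws SAXException {
				if ("eAnnotations".equals(qName) || "eAnnotations".equals(local)) {
					if (SHARD_SOURCE.equals(attrs.getValue("source"))) {
						throw new StopParsing(true); // found it: stop reading the file
					}
				}
				// The annotation, if present, appears among the first elements;
				// give up after a handful so we never scan the whole resource
				if (++elements > 8) {
					throw new StopParsing(false);
				}
			}
		};
		try {
			SAXParserFactory.newInstance().newSAXParser()
					.parse(new InputSource(new StringReader(xmiText)), handler);
			return false; // document ended without the annotation
		} catch (StopParsing stop) {
			return stop.isShard;
		}
	}
}
```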
Diffstat (limited to 'plugins/infra/emf')
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/META-INF/MANIFEST.MF | 6
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/plugin.xml | 9
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/pom.xml | 4
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/schema/index.exsd | 119
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/Activator.java | 42
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/WorkspaceSaveHelper.java | 262
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/AbstractCrossReferenceIndex.java | 404
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndex.java | 226
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndexHandler.java | 270
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/InternalIndexUtil.java | 73
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/OnDemandCrossReferenceIndex.java | 182
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/StopParsing.java | 30
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IIndexSaveParticipant.java | 44
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexManager.java | 1075
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexPersistenceManager.java | 256
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/InternalModelIndex.java | 118
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ICrossReferenceIndex.java | 274
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceHelper.java | 418
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceLocator.java | 178
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/IWorkspaceModelIndexProvider.java | 27
-rw-r--r-- plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/WorkspaceModelIndex.java | 1107
21 files changed, 4330 insertions, 794 deletions
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/META-INF/MANIFEST.MF b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/META-INF/MANIFEST.MF
index eb599b54ca7..63e6c7970ef 100644
--- a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/META-INF/MANIFEST.MF
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/META-INF/MANIFEST.MF
@@ -4,18 +4,20 @@ Export-Package: org.eclipse.papyrus.infra.emf,
org.eclipse.papyrus.infra.emf.advice,
org.eclipse.papyrus.infra.emf.commands,
org.eclipse.papyrus.infra.emf.edit.domain,
+ org.eclipse.papyrus.infra.emf.internal.resource;x-internal:=true,
+ org.eclipse.papyrus.infra.emf.internal.resource.index;x-internal:=true,
org.eclipse.papyrus.infra.emf.requests,
org.eclipse.papyrus.infra.emf.resource,
org.eclipse.papyrus.infra.emf.resource.index,
org.eclipse.papyrus.infra.emf.spi.resolver,
org.eclipse.papyrus.infra.emf.utils
-Require-Bundle: org.eclipse.papyrus.infra.core;bundle-version="[2.0.0,3.0.0)";visibility:=reexport,
+Require-Bundle: org.eclipse.papyrus.infra.core;bundle-version="[2.1.0,3.0.0)";visibility:=reexport,
org.eclipse.core.expressions;bundle-version="[3.5.0,4.0.0)";visibility:=reexport,
org.eclipse.gmf.runtime.emf.type.core;bundle-version="[1.9.0,2.0.0)";visibility:=reexport,
org.eclipse.papyrus.emf.facet.custom.core;bundle-version="[2.0.0,3.0.0)";visibility:=reexport
Bundle-Vendor: Eclipse Modeling Project
Bundle-ActivationPolicy: lazy
-Bundle-Version: 2.0.100.qualifier
+Bundle-Version: 2.2.0.qualifier
Bundle-Name: EMF Tools
Bundle-Activator: org.eclipse.papyrus.infra.emf.Activator
Bundle-ManifestVersion: 2
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/plugin.xml b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/plugin.xml
index de27f193cdc..3fa4aef1e73 100644
--- a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/plugin.xml
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/plugin.xml
@@ -18,6 +18,8 @@
-->
<plugin>
<extension-point id="dependencyUpdateParticipant" name="Dependency Update Participants" schema="schema/dependencyUpdateParticipant.exsd"/>
+ <extension-point id="index" name="Workspace Model Index" schema="schema/index.exsd"/>
+
<extension
point="org.eclipse.papyrus.infra.types.core.elementTypeSetConfiguration">
<elementTypeSet
@@ -26,4 +28,11 @@
</elementTypeSet>
</extension>
+ <extension
+ point="org.eclipse.papyrus.infra.emf.index">
+ <indexProvider
+ class="org.eclipse.papyrus.infra.emf.internal.resource.CrossReferenceIndex$IndexProvider">
+ </indexProvider>
+ </extension>
+
</plugin>
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/pom.xml b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/pom.xml
index 4f142a8b4fe..157d5cc080a 100644
--- a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/pom.xml
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/pom.xml
@@ -7,6 +7,6 @@
<version>0.0.1-SNAPSHOT</version>
</parent>
<artifactId>org.eclipse.papyrus.infra.emf</artifactId>
- <version>2.0.100-SNAPSHOT</version>
+ <version>2.2.0-SNAPSHOT</version>
<packaging>eclipse-plugin</packaging>
-</project> \ No newline at end of file
+</project>
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/schema/index.exsd b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/schema/index.exsd
new file mode 100644
index 00000000000..f70c104a3a8
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/schema/index.exsd
@@ -0,0 +1,119 @@
+<?xml version='1.0' encoding='UTF-8'?>
+<!-- Schema file written by PDE -->
+<schema targetNamespace="org.eclipse.papyrus.infra.emf" xmlns="http://www.w3.org/2001/XMLSchema">
+<annotation>
+ <appinfo>
+ <meta.schema plugin="org.eclipse.papyrus.infra.emf" id="index" name="Workspace Model Index"/>
+ </appinfo>
+ <documentation>
+ Registration of workspace model indices.
+ </documentation>
+ </annotation>
+
+ <element name="extension">
+ <annotation>
+ <appinfo>
+ <meta.element />
+ </appinfo>
+ </annotation>
+ <complexType>
+ <sequence>
+ <element ref="indexProvider" minOccurs="1" maxOccurs="unbounded"/>
+ </sequence>
+ <attribute name="point" type="string" use="required">
+ <annotation>
+ <documentation>
+
+ </documentation>
+ </annotation>
+ </attribute>
+ <attribute name="id" type="string">
+ <annotation>
+ <documentation>
+
+ </documentation>
+ </annotation>
+ </attribute>
+ <attribute name="name" type="string">
+ <annotation>
+ <documentation>
+
+ </documentation>
+ <appinfo>
+ <meta.attribute translatable="true"/>
+ </appinfo>
+ </annotation>
+ </attribute>
+ </complexType>
+ </element>
+
+ <element name="indexProvider">
+ <annotation>
+ <documentation>
+ A supplier of a &lt;tt&gt;WorkspaceModelIndex&lt;/tt&gt; to add to the indexing subsystem.
+ </documentation>
+ </annotation>
+ <complexType>
+ <attribute name="class" type="string" use="required">
+ <annotation>
+ <documentation>
+ The class implementing the index provider.
+ </documentation>
+ <appinfo>
+ <meta.attribute kind="java" basedOn=":org.eclipse.papyrus.infra.emf.resource.index.IWorkspaceModelIndexProvider"/>
+ </appinfo>
+ </annotation>
+ </attribute>
+ </complexType>
+ </element>
+
+ <annotation>
+ <appinfo>
+ <meta.section type="since"/>
+ </appinfo>
+ <documentation>
+ 2.1
+ </documentation>
+ </annotation>
+
+ <annotation>
+ <appinfo>
+ <meta.section type="examples"/>
+ </appinfo>
+ <documentation>
+ [Enter extension point usage example here.]
+ </documentation>
+ </annotation>
+
+ <annotation>
+ <appinfo>
+ <meta.section type="apiinfo"/>
+ </appinfo>
+ <documentation>
+ [Enter API information here.]
+ </documentation>
+ </annotation>
+
+ <annotation>
+ <appinfo>
+ <meta.section type="implementation"/>
+ </appinfo>
+ <documentation>
+ [Enter information about supplied implementation of this extension point.]
+ </documentation>
+ </annotation>
+
+ <annotation>
+ <appinfo>
+ <meta.section type="copyright"/>
+ </appinfo>
+ <documentation>
+ Copyright (c) 2016 Christian W. Damus and others.
+All rights reserved. This program and the accompanying materials
+are made available under the terms of the Eclipse Public License v1.0
+which accompanies this distribution, and is available at
+http://www.eclipse.org/legal/epl-v10.html
+ </documentation>
+ </annotation>
+
+</schema>
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/Activator.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/Activator.java
index 0698bdea266..a28b0c13ec4 100644
--- a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/Activator.java
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/Activator.java
@@ -8,14 +8,22 @@
*
* Contributors:
* Camille Letavernier (camille.letavernier@cea.fr) - Initial API and implementation
- * Christian W. Damus - bug 485220
+ * Christian W. Damus - bugs 485220, 496299
*
*****************************************************************************/
package org.eclipse.papyrus.infra.emf;
import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import org.eclipse.core.resources.ISavedState;
+import org.eclipse.core.resources.ResourcesPlugin;
+import org.eclipse.core.runtime.IProgressMonitor;
+import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Plugin;
+import org.eclipse.core.runtime.Status;
+import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
@@ -24,6 +32,8 @@ import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.papyrus.emf.facet.custom.core.ICustomizationManager;
import org.eclipse.papyrus.emf.facet.custom.core.ICustomizationManagerFactory;
import org.eclipse.papyrus.infra.core.log.LogHelper;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.IndexManager;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.IndexPersistenceManager;
import org.eclipse.papyrus.infra.emf.spi.resolver.EObjectResolverService;
import org.eclipse.papyrus.infra.emf.spi.resolver.IEObjectResolver;
import org.osgi.framework.BundleContext;
@@ -66,6 +76,30 @@ public class Activator extends Plugin {
log = new LogHelper(this);
resolverService = new EObjectResolverService(context);
+
+ // Set up for workspace save and loading from saved state
+ WorkspaceSaveHelper saveHelper = new WorkspaceSaveHelper();
+ List<WorkspaceSaveHelper.SaveDelegate> saveDelegates = getSaveDelegates();
+ ISavedState state = ResourcesPlugin.getWorkspace().addSaveParticipant(
+ PLUGIN_ID,
+ saveHelper.createSaveParticipant(saveDelegates));
+ if ((state != null) && (state.getSaveNumber() != 0)) {
+ saveHelper.initializeSaveDelegates(state, saveDelegates);
+ }
+
+ // Kick off the workspace model indexing system
+ new Job("Initialize workspace model index") {
+ {
+ setSystem(true);
+ }
+
+ @Override
+ protected IStatus run(IProgressMonitor monitor) {
+ IndexManager.getInstance();
+
+ return Status.OK_STATUS;
+ }
+ }.schedule();
}
@Override
@@ -127,4 +161,10 @@ public class Activator extends Plugin {
return resolverService;
}
+ private List<WorkspaceSaveHelper.SaveDelegate> getSaveDelegates() {
+ return Arrays.asList(
+ new WorkspaceSaveHelper.SaveDelegate("index", //$NON-NLS-1$
+ IndexPersistenceManager.INSTANCE.getSaveParticipant(),
+ IndexPersistenceManager.INSTANCE::initialize));
+ }
}
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/WorkspaceSaveHelper.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/WorkspaceSaveHelper.java
new file mode 100644
index 00000000000..f01728c8206
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/WorkspaceSaveHelper.java
@@ -0,0 +1,262 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf;
+
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+import java.util.Collection;
+import java.util.List;
+import java.util.function.BiConsumer;
+import java.util.function.Supplier;
+import java.util.stream.Stream;
+
+import org.eclipse.core.resources.ISaveContext;
+import org.eclipse.core.resources.ISaveParticipant;
+import org.eclipse.core.resources.ISavedState;
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.IPath;
+import org.eclipse.core.runtime.Path;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Helper class for delegating workspace save participation.
+ */
+class WorkspaceSaveHelper {
+
+ /**
+ * Initializes me.
+ */
+ WorkspaceSaveHelper() {
+ super();
+ }
+
+ void initializeSaveDelegates(ISavedState state, List<SaveDelegate> saveDelegates) throws CoreException {
+ SaveDelegate[] currentDelegate = new SaveDelegate[] { null };
+ state = delegatingSavedState(state, () -> currentDelegate[0]);
+
+ for (SaveDelegate next : saveDelegates) {
+ currentDelegate[0] = next;
+ next.initializer.accept(state);
+ }
+ }
+
+ ISaveParticipant createSaveParticipant(List<SaveDelegate> saveDelegates) {
+ return new DelegatingSaveParticipant(saveDelegates);
+ }
+
+ /**
+ * Creates a save context that provides a view of path mappings specific to the current
+ * save delegate in the sequence.
+ *
+ * @param context
+ * the real save context
+ * @param currentDelegate
+ * a supplier of the current save delegate
+ *
+ * @return the delegating save context
+ */
+ private ISaveContext delegatingSaveContext(ISaveContext context, Supplier<? extends SaveDelegate> currentDelegate) {
+ InvocationHandler handler = new InvocationHandler() {
+
+ @Override
+ public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
+ if (method.getDeclaringClass() == ISaveContext.class) {
+ switch (method.getName()) {
+ case "getFiles":
+ if (method.getParameterCount() == 0) {
+ // This is our getFiles
+ return getFiles();
+ }
+ break;
+ case "map":
+ if (method.getParameterCount() == 2) {
+ // This is our map(IPath, IPath)
+ return map((IPath) args[0], (IPath) args[1]);
+ }
+ break;
+ }
+ }
+
+ return method.invoke(context, args);
+ }
+
+ private IPath[] getFiles() {
+ // Get only those with our particular prefix and strip that prefix
+ IPath prefix = currentDelegate.get().pathPrefix;
+ return Stream.of(context.getFiles())
+ .filter(prefix::isPrefixOf)
+ .map(p -> p.makeRelativeTo(prefix))
+ .toArray(IPath[]::new);
+ }
+
+ private Void map(IPath path, IPath location) {
+ // Prepend the supplied path key with our unique prefix
+ context.map(currentDelegate.get().pathPrefix.append(path), location);
+ return null;
+ }
+ };
+
+ return (ISaveContext) Proxy.newProxyInstance(getClass().getClassLoader(),
+ new Class<?>[] { ISaveContext.class },
+ handler);
+ }
+
+ /**
+ * Creates a saved state that provides a view of path mappings specific to the current
+ * save delegate in the sequence.
+ *
+ * @param state
+ * the real saved state
+ * @param currentDelegate
+ * a supplier of the current save delegate
+ *
+ * @return the delegating saved state
+ */
+ private ISavedState delegatingSavedState(ISavedState state, Supplier<? extends SaveDelegate> currentDelegate) {
+ InvocationHandler handler = new InvocationHandler() {
+
+ @Override
+ public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
+ if (method.getDeclaringClass() == ISavedState.class) {
+ switch (method.getName()) {
+ case "getFiles":
+ if (method.getParameterCount() == 0) {
+ // This is our getFiles
+ return getFiles();
+ }
+ break;
+ case "lookup":
+ if (method.getParameterCount() == 1) {
+ // This is our lookup(IPath)
+ return lookup((IPath) args[0]);
+ }
+ break;
+ }
+ }
+
+ return method.invoke(state, args);
+ }
+
+ private IPath[] getFiles() {
+ // Get only those with our particular prefix and strip that prefix
+ IPath prefix = currentDelegate.get().pathPrefix;
+ return Stream.of(state.getFiles())
+ .filter(prefix::isPrefixOf)
+ .map(p -> p.makeRelativeTo(prefix))
+ .toArray(IPath[]::new);
+ }
+
+ private IPath lookup(IPath path) {
+ // Prepend the supplied path key with our unique prefix
+ return state.lookup(currentDelegate.get().pathPrefix.append(path));
+ }
+ };
+
+ return (ISavedState) Proxy.newProxyInstance(getClass().getClassLoader(),
+ new Class<?>[] { ISavedState.class },
+ handler);
+ }
+
+ //
+ // Nested types
+ //
+
+ final static class SaveDelegate {
+ final IPath pathPrefix;
+ final ISaveParticipant participant;
+ final InitAction initializer;
+
+ SaveDelegate(String pathPrefix, ISaveParticipant participant, InitAction initializer) {
+ super();
+
+ this.pathPrefix = new Path(pathPrefix);
+ this.participant = participant;
+ this.initializer = initializer;
+ }
+ }
+
+ // This delegating participant only handles full saves
+ private class DelegatingSaveParticipant implements ISaveParticipant {
+ private final List<SaveDelegate> delegates;
+
+ DelegatingSaveParticipant(Collection<? extends SaveDelegate> delegates) {
+ super();
+
+ this.delegates = ImmutableList.copyOf(delegates);
+ }
+
+ @Override
+ public void prepareToSave(ISaveContext context) throws CoreException {
+ if (context.getKind() == ISaveContext.FULL_SAVE) {
+ iterate(context, ISaveParticipant::prepareToSave);
+ }
+ }
+
+ @Override
+ public void saving(ISaveContext context) throws CoreException {
+ if (context.getKind() == ISaveContext.FULL_SAVE) {
+ iterate(context, ISaveParticipant::saving);
+
+ // Declare full participation to increment the save number
+ context.needSaveNumber();
+ }
+ }
+
+ @Override
+ public void doneSaving(ISaveContext context) {
+ if (context.getKind() == ISaveContext.FULL_SAVE) {
+ safeIterate(context, ISaveParticipant::doneSaving);
+ }
+ }
+
+ @Override
+ public void rollback(ISaveContext context) {
+ if (context.getKind() == ISaveContext.FULL_SAVE) {
+ safeIterate(context, ISaveParticipant::rollback);
+ }
+ }
+
+ void iterate(ISaveContext context, SaveAction saveAction) throws CoreException {
+ SaveDelegate[] current = { null };
+ ISaveContext privateContext = delegatingSaveContext(context, () -> current[0]);
+
+ for (SaveDelegate next : delegates) {
+ current[0] = next;
+ saveAction.accept(next.participant, privateContext);
+ }
+ }
+
+ void safeIterate(ISaveContext context, BiConsumer<? super ISaveParticipant, ? super ISaveContext> saveAction) {
+ SaveDelegate[] current = { null };
+ ISaveContext privateContext = delegatingSaveContext(context, () -> current[0]);
+
+ for (SaveDelegate next : delegates) {
+ current[0] = next;
+ saveAction.accept(next.participant, privateContext);
+ }
+ }
+ }
+
+ @FunctionalInterface
+ interface InitAction {
+ void accept(ISavedState state) throws CoreException;
+ }
+
+ @FunctionalInterface
+ interface SaveAction {
+ void accept(ISaveParticipant participant, ISaveContext context) throws CoreException;
+ }
+}
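The WorkspaceSaveHelper above multiplexes several save delegates through a single workspace save registration by wrapping the real `ISaveContext`/`ISavedState` in a `java.lang.reflect.Proxy` that rewrites path keys with a per-delegate prefix and passes every other call through. A self-contained sketch of that interception pattern, using a hypothetical `State` interface as a stand-in for `ISavedState` (the names here are illustrative, not Eclipse API):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class PrefixedLookup {
	/** Stand-in for ISavedState: maps saved path keys to locations. */
	public interface State {
		String lookup(String path);

		int getSaveNumber();
	}

	/**
	 * Wraps a state so that lookups are transparently namespaced under
	 * {@code prefix/}, the way each SaveDelegate's paths are prefixed;
	 * every other method passes straight through to the real state.
	 */
	public static State prefixed(State real, String prefix) {
		InvocationHandler handler = (proxy, method, args) -> {
			if ("lookup".equals(method.getName()) && method.getParameterCount() == 1) {
				// Rewrite the key so delegates cannot collide with each other
				return real.lookup(prefix + "/" + args[0]);
			}
			return method.invoke(real, args); // delegate everything else
		};
		return (State) Proxy.newProxyInstance(State.class.getClassLoader(),
				new Class<?>[] { State.class }, handler);
	}
}
```

The real helper applies the same idea twice (to `ISaveContext.map`/`getFiles` and to `ISavedState.lookup`/`getFiles`), with the current delegate supplied dynamically so one proxy instance serves the whole iteration.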
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/AbstractCrossReferenceIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/AbstractCrossReferenceIndex.java
new file mode 100644
index 00000000000..82153f1a84b
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/AbstractCrossReferenceIndex.java
@@ -0,0 +1,404 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.IStatus;
+import org.eclipse.core.runtime.Status;
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.papyrus.infra.emf.Activator;
+import org.eclipse.papyrus.infra.emf.resource.ICrossReferenceIndex;
+
+import com.google.common.collect.HashMultimap;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.ImmutableSetMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.SetMultimap;
+import com.google.common.collect.Sets;
+import com.google.common.util.concurrent.ListenableFuture;
+
+/**
+ * Common implementation of a cross-reference index in the workspace.
+ */
+public abstract class AbstractCrossReferenceIndex implements ICrossReferenceIndex {
+
+ public static final String SHARD_ANNOTATION_SOURCE = "http://www.eclipse.org/papyrus/2016/resource/shard"; //$NON-NLS-1$
+
+ static final int MAX_INDEX_JOBS = 5;
+
+ final Object sync = new Object();
+
+ final SetMultimap<URI, URI> outgoingReferences = HashMultimap.create();
+ final SetMultimap<URI, URI> incomingReferences = HashMultimap.create();
+
+ final SetMultimap<URI, URI> resourceToShards = HashMultimap.create();
+ final SetMultimap<URI, URI> shardToParents = HashMultimap.create();
+
+ // These are abstracted as URIs without extension
+ SetMultimap<URI, URI> aggregateOutgoingReferences;
+ SetMultimap<URI, URI> aggregateIncomingReferences;
+ SetMultimap<URI, URI> aggregateResourceToShards;
+ SetMultimap<URI, URI> aggregateShardToParents;
+ final SetMultimap<URI, String> shards = HashMultimap.create();
+
+ /**
+ * Initializes me.
+ */
+ AbstractCrossReferenceIndex() {
+ super();
+ }
+
+ //
+ // Queries
+ //
+
+ @Override
+ public ListenableFuture<SetMultimap<URI, URI>> getOutgoingCrossReferencesAsync() {
+ return afterIndex(getOutgoingCrossReferencesCallable());
+ }
+
+ @Override
+ public SetMultimap<URI, URI> getOutgoingCrossReferences() throws CoreException {
+ return sync(afterIndex(getOutgoingCrossReferencesCallable()));
+ }
+
+ Callable<SetMultimap<URI, URI>> getOutgoingCrossReferencesCallable() {
+ return sync(() -> ImmutableSetMultimap.copyOf(outgoingReferences));
+ }
+
+ @Override
+ public ListenableFuture<Set<URI>> getOutgoingCrossReferencesAsync(URI resourceURI) {
+ return afterIndex(getOutgoingCrossReferencesCallable(resourceURI));
+ }
+
+ @Override
+ public Set<URI> getOutgoingCrossReferences(URI resourceURI) throws CoreException {
+ return sync(afterIndex(getOutgoingCrossReferencesCallable(resourceURI)));
+ }
+
+ Callable<Set<URI>> getOutgoingCrossReferencesCallable(URI resourceURI) {
+ return sync(() -> {
+ String ext = resourceURI.fileExtension();
+ URI withoutExt = resourceURI.trimFileExtension();
+ Set<URI> result = getAggregateOutgoingCrossReferences().get(withoutExt).stream()
+ .map(uri -> uri.appendFileExtension(ext))
+ .collect(Collectors.toSet());
+
+ return Collections.unmodifiableSet(result);
+ });
+ }
+
+ SetMultimap<URI, URI> getAggregateOutgoingCrossReferences() {
+ SetMultimap<URI, URI> result;
+
+ synchronized (sync) {
+ if (aggregateOutgoingReferences == null) {
+ // Compute the aggregate now
+ aggregateOutgoingReferences = HashMultimap.create();
+ for (Map.Entry<URI, URI> next : outgoingReferences.entries()) {
+ aggregateOutgoingReferences.put(next.getKey().trimFileExtension(),
+ next.getValue().trimFileExtension());
+ }
+ }
+
+ result = aggregateOutgoingReferences;
+ }
+
+ return result;
+ }
+
+ @Override
+ public ListenableFuture<SetMultimap<URI, URI>> getIncomingCrossReferencesAsync() {
+ return afterIndex(getIncomingCrossReferencesCallable());
+ }
+
+ @Override
+ public SetMultimap<URI, URI> getIncomingCrossReferences() throws CoreException {
+ return sync(afterIndex(getIncomingCrossReferencesCallable()));
+ }
+
+ Callable<SetMultimap<URI, URI>> getIncomingCrossReferencesCallable() {
+ return sync(() -> ImmutableSetMultimap.copyOf(incomingReferences));
+ }
+
+ @Override
+ public ListenableFuture<Set<URI>> getIncomingCrossReferencesAsync(URI resourceURI) {
+ return afterIndex(getIncomingCrossReferencesCallable(resourceURI));
+ }
+
+ @Override
+ public Set<URI> getIncomingCrossReferences(URI resourceURI) throws CoreException {
+ return sync(afterIndex(getIncomingCrossReferencesCallable(resourceURI)));
+ }
+
+ Callable<Set<URI>> getIncomingCrossReferencesCallable(URI resourceURI) {
+ return sync(() -> {
+ String ext = resourceURI.fileExtension();
+ URI withoutExt = resourceURI.trimFileExtension();
+ Set<URI> result = getAggregateIncomingCrossReferences().get(withoutExt).stream()
+ .map(uri -> uri.appendFileExtension(ext))
+ .collect(Collectors.toSet());
+
+ return Collections.unmodifiableSet(result);
+ });
+ }
+
+ SetMultimap<URI, URI> getAggregateIncomingCrossReferences() {
+ SetMultimap<URI, URI> result;
+
+ synchronized (sync) {
+ if (aggregateIncomingReferences == null) {
+ // Compute the aggregate now
+ aggregateIncomingReferences = HashMultimap.create();
+ for (Map.Entry<URI, URI> next : incomingReferences.entries()) {
+ aggregateIncomingReferences.put(next.getKey().trimFileExtension(),
+ next.getValue().trimFileExtension());
+ }
+ }
+
+ result = aggregateIncomingReferences;
+ }
+
+ return result;
+ }
+
+ @Override
+ public ListenableFuture<Boolean> isShardAsync(URI resourceURI) {
+ return afterIndex(getIsShardCallable(resourceURI));
+ }
+
+ @Override
+ public boolean isShard(URI resourceURI) throws CoreException {
+ return sync(afterIndex(getIsShardCallable(resourceURI)));
+ }
+
+ final <V> V sync(Future<V> future) throws CoreException {
+ try {
+ return future.get();
+ } catch (InterruptedException e) {
+ throw new CoreException(Status.CANCEL_STATUS);
+ } catch (ExecutionException e) {
+ throw new CoreException(new Status(IStatus.ERROR, Activator.PLUGIN_ID, "Failed to access the resource shard index", e));
+ }
+ }
+
+ Callable<Boolean> getIsShardCallable(URI shardURI) {
+ return sync(() -> isShard0(shardURI.trimFileExtension()));
+ }
+
+ boolean isShard0(URI uriWithoutExtension) {
+ return !shards.get(uriWithoutExtension).isEmpty();
+ }
+
+ void setShard(URI resourceURI, boolean isShard) {
+ if (isShard) {
+ shards.put(resourceURI.trimFileExtension(), resourceURI.fileExtension());
+ } else {
+ shards.remove(resourceURI.trimFileExtension(), resourceURI.fileExtension());
+ }
+ }
+
+ @Override
+ public ListenableFuture<SetMultimap<URI, URI>> getShardsAsync() {
+ return afterIndex(getShardsCallable());
+ }
+
+ @Override
+ public SetMultimap<URI, URI> getShards() throws CoreException {
+ return sync(afterIndex(getShardsCallable()));
+ }
+
+ Callable<SetMultimap<URI, URI>> getShardsCallable() {
+ return sync(() -> ImmutableSetMultimap.copyOf(resourceToShards));
+ }
+
+ @Override
+ public ListenableFuture<Set<URI>> getShardsAsync(URI resourceURI) {
+ return afterIndex(getShardsCallable(resourceURI));
+ }
+
+ @Override
+ public Set<URI> getShards(URI resourceURI) throws CoreException {
+ return sync(afterIndex(getShardsCallable(resourceURI)));
+ }
+
+ Callable<Set<URI>> getShardsCallable(URI shardURI) {
+ return sync(() -> {
+ String ext = shardURI.fileExtension();
+ URI withoutExt = shardURI.trimFileExtension();
+ Set<URI> result = getAggregateShards().get(withoutExt).stream()
+ // Only those that actually are shards
+ .filter(AbstractCrossReferenceIndex.this::isShard0)
+ .map(uri -> uri.appendFileExtension(ext))
+ .collect(Collectors.toSet());
+
+ return Collections.unmodifiableSet(result);
+ });
+ }
+
+ SetMultimap<URI, URI> getAggregateShards() {
+ SetMultimap<URI, URI> result;
+
+ synchronized (sync) {
+ if (aggregateResourceToShards == null) {
+ // Compute the aggregate now
+ aggregateResourceToShards = HashMultimap.create();
+ for (Map.Entry<URI, URI> next : resourceToShards.entries()) {
+ aggregateResourceToShards.put(next.getKey().trimFileExtension(),
+ next.getValue().trimFileExtension());
+ }
+ }
+
+ result = aggregateResourceToShards;
+ }
+
+ return result;
+ }
+
+ @Override
+ public ListenableFuture<Set<URI>> getParentsAsync(URI shardURI) {
+ return afterIndex(getParentsCallable(shardURI));
+ }
+
+ @Override
+ public Set<URI> getParents(URI shardURI) throws CoreException {
+ return sync(afterIndex(getParentsCallable(shardURI)));
+ }
+
+ Callable<Set<URI>> getParentsCallable(URI shardURI) {
+ return sync(() -> {
+ Set<URI> result;
+ URI withoutExt = shardURI.trimFileExtension();
+
+ // If it's not a shard, it has no parents, by definition
+ if (!isShard0(withoutExt)) {
+ result = Collections.emptySet();
+ } else {
+ String ext = shardURI.fileExtension();
+ result = getAggregateShardToParents().get(withoutExt).stream()
+ .map(uri -> uri.appendFileExtension(ext))
+ .collect(Collectors.toSet());
+ result = Collections.unmodifiableSet(result);
+ }
+
+ return result;
+ });
+ }
+
+ SetMultimap<URI, URI> getAggregateShardToParents() {
+ SetMultimap<URI, URI> result;
+
+ synchronized (sync) {
+ if (aggregateShardToParents == null) {
+ // Compute the aggregate now
+ aggregateShardToParents = HashMultimap.create();
+ for (Map.Entry<URI, URI> next : shardToParents.entries()) {
+ aggregateShardToParents.put(next.getKey().trimFileExtension(),
+ next.getValue().trimFileExtension());
+ }
+ }
+
+ result = aggregateShardToParents;
+ }
+
+ return result;
+ }
+
+ @Override
+ public ListenableFuture<Set<URI>> getRootsAsync(URI shardURI) {
+ return afterIndex(getRootsCallable(shardURI));
+ }
+
+ @Override
+ public Set<URI> getRoots(URI shardURI) throws CoreException {
+ return sync(afterIndex(getRootsCallable(shardURI)));
+ }
+
+ Callable<Set<URI>> getRootsCallable(URI shardURI) {
+ return sync(() -> {
+ Set<URI> result;
+ URI withoutExt = shardURI.trimFileExtension();
+
+ // If it's not a shard, it has no roots, by definition
+ if (!isShard0(withoutExt)) {
+ result = Collections.emptySet();
+ } else {
+ // TODO: Cache this?
+ ImmutableSet.Builder<URI> resultBuilder = ImmutableSet.builder();
+
+ SetMultimap<URI, URI> shardToParents = getAggregateShardToParents();
+
+ // Breadth-first search of the parent graph
+ Queue<URI> queue = Lists.newLinkedList();
+ Set<URI> cycleDetect = Sets.newHashSet();
+ String ext = shardURI.fileExtension();
+ queue.add(withoutExt);
+
+ for (URI next = queue.poll(); next != null; next = queue.poll()) {
+ if (cycleDetect.add(next)) {
+ if (shardToParents.containsKey(next)) {
+ queue.addAll(shardToParents.get(next));
+ } else {
+ // It's a root
+ resultBuilder.add(next.appendFileExtension(ext));
+ }
+ }
+ }
+
+ result = resultBuilder.build();
+ }
+
+ return result;
+ });
+ }
+
+ final <V> Callable<V> sync(Callable<V> callable) {
+ return new SyncCallable<V>() {
+ @Override
+ protected V doCall() throws Exception {
+ return callable.call();
+ }
+ };
+ }
+
+ //
+ // Indexing
+ //
+
+ abstract <V> ListenableFuture<V> afterIndex(Callable<V> callable);
+
+ //
+ // Nested types
+ //
+
+ private abstract class SyncCallable<V> implements Callable<V> {
+ @Override
+ public final V call() throws Exception {
+ synchronized (sync) {
+ return doCall();
+ }
+ }
+
+ protected abstract V doCall() throws Exception;
+ }
+}
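The root computation in `getRootsCallable` above is a breadth-first walk up the shard-to-parents graph, with a visited set guarding against cycles in a malformed parent graph. The same traversal as a standalone sketch, using plain JDK collections instead of Guava (names are illustrative, not Papyrus API):

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

class ShardRoots {
    /**
     * Breadth-first search up a shard-to-parents graph; any reached node
     * that has no parents of its own is a root.
     */
    static Set<String> roots(Map<String, Set<String>> shardToParents, String start) {
        Set<String> result = new HashSet<>();
        Set<String> visited = new HashSet<>(); // cycle detection
        Queue<String> queue = new ArrayDeque<>();
        queue.add(start);
        for (String next = queue.poll(); next != null; next = queue.poll()) {
            if (visited.add(next)) {
                Set<String> parents = shardToParents.get(next);
                if (parents == null || parents.isEmpty()) {
                    result.add(next); // no parents: it's a root of the model
                } else {
                    queue.addAll(parents);
                }
            }
        }
        return result;
    }
}
```

Note that a unit caught in a pure parent cycle yields no roots at all, but the visited set still guarantees termination.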
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndex.java
new file mode 100644
index 00000000000..c51c3ce56fd
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndex.java
@@ -0,0 +1,226 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+import java.io.InputStream;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.stream.Collectors;
+
+import javax.xml.parsers.SAXParser;
+import javax.xml.parsers.SAXParserFactory;
+
+import org.eclipse.core.resources.IFile;
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.papyrus.infra.emf.Activator;
+import org.eclipse.papyrus.infra.emf.resource.index.IWorkspaceModelIndexProvider;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex.PersistentIndexHandler;
+import org.xml.sax.helpers.DefaultHandler;
+
+import com.google.common.util.concurrent.ListenableFuture;
+
+/**
+ * An index of cross-resource references in the workspace.
+ */
+public class CrossReferenceIndex extends AbstractCrossReferenceIndex {
+
+ private static final CrossReferenceIndex INSTANCE = new CrossReferenceIndex();
+
+ private final WorkspaceModelIndex<CrossReferencedFile> index;
+
+ /**
+ * Not instantiable by clients.
+ */
+ private CrossReferenceIndex() {
+ super();
+
+ // TODO: Is there a constant somewhere for the XMI content-type?
+ index = new WorkspaceModelIndex<CrossReferencedFile>(
+ "papyrusCrossRefs", //$NON-NLS-1$
+ "org.eclipse.emf.ecore.xmi", //$NON-NLS-1$
+ null, indexer(), MAX_INDEX_JOBS);
+ }
+
+ public void dispose() {
+ index.dispose();
+ }
+
+ public static CrossReferenceIndex getInstance() {
+ return INSTANCE;
+ }
+
+ //
+ // Indexing
+ //
+
+	@Override
+	<V> ListenableFuture<V> afterIndex(Callable<V> callable) {
+ return index.afterIndex(callable);
+ }
+
+ private void runIndexHandler(IFile file, URI resourceURI, DefaultHandler handler) {
+ try (InputStream input = file.getContents()) {
+ SAXParserFactory factory = SAXParserFactory.newInstance();
+ factory.setValidating(false);
+ factory.setNamespaceAware(true);
+ SAXParser parser = factory.newSAXParser();
+
+ parser.parse(input, handler, resourceURI.toString());
+ } catch (Exception e) {
+ Activator.log.error("Exception in indexing resource", e); //$NON-NLS-1$
+ }
+ }
+
+ private boolean indexResource(IFile file, CrossReferencedFile index) {
+ boolean result = true;
+
+ final URI resourceURI = URI.createPlatformResourceURI(file.getFullPath().toString(), true);
+
+ synchronized (sync) {
+ // unindex the resource
+ unindexResource(file);
+
+ // update the forward mapping
+ resourceToShards.putAll(resourceURI, index.getShards());
+ outgoingReferences.putAll(resourceURI, index.getCrossReferences());
+
+ // and the reverse mapping
+ for (URI next : index.getShards()) {
+ shardToParents.put(next, resourceURI);
+ }
+ for (URI next : index.getCrossReferences()) {
+ incomingReferences.put(next, resourceURI);
+ }
+
+ // Is it actually a shard style? (we index all cross-resource containment)
+ setShard(resourceURI, index.isShard());
+ }
+
+ return result;
+ }
+
+ private CrossReferencedFile indexResource(IFile file) {
+ final URI resourceURI = URI.createPlatformResourceURI(file.getFullPath().toString(), true);
+
+ CrossReferenceIndexHandler handler = new CrossReferenceIndexHandler(resourceURI);
+ runIndexHandler(file, resourceURI, handler);
+
+ CrossReferencedFile result = new CrossReferencedFile(handler);
+ indexResource(file, result);
+
+ return result;
+ }
+
+ private void unindexResource(IFile file) {
+ final URI resourceURI = URI.createPlatformResourceURI(file.getFullPath().toString(), true);
+
+ synchronized (sync) {
+			// purge the aggregate (extension-less) maps; they are recomputed on demand
+ aggregateResourceToShards = null;
+ aggregateShardToParents = null;
+ aggregateOutgoingReferences = null;
+ aggregateIncomingReferences = null;
+ setShard(resourceURI, false);
+
+ // And remove all traces of this resource
+ resourceToShards.removeAll(resourceURI);
+ outgoingReferences.removeAll(resourceURI);
+
+ // the multimap's entry collection that underlies the key-set
+ // is modified as we go, so take a safe copy of the keys
+ for (URI next : new ArrayList<>(shardToParents.keySet())) {
+ shardToParents.remove(next, resourceURI);
+ }
+ for (URI next : new ArrayList<>(incomingReferences.keySet())) {
+ incomingReferences.remove(next, resourceURI);
+ }
+ }
+ }
+
+ private PersistentIndexHandler<CrossReferencedFile> indexer() {
+ return new PersistentIndexHandler<CrossReferencedFile>() {
+ @Override
+ public CrossReferencedFile index(IFile file) {
+ return indexResource(file);
+ }
+
+ @Override
+ public void unindex(IFile file) {
+ CrossReferenceIndex.this.unindexResource(file);
+ }
+
+ @Override
+ public boolean load(IFile file, CrossReferencedFile index) {
+ return CrossReferenceIndex.this.indexResource(file, index);
+ }
+ };
+ }
+
+ //
+ // Nested types
+ //
+
+ static final class CrossReferencedFile implements Serializable {
+ private static final long serialVersionUID = 1L;
+
+ private boolean isShard;
+ private Set<String> crossReferences;
+ private Set<String> shards;
+
+ private transient Set<URI> crossReferenceURIs;
+ private transient Set<URI> shardURIs;
+
+ CrossReferencedFile(CrossReferenceIndexHandler handler) {
+ super();
+
+ this.isShard = handler.isShard();
+ this.crossReferences = handler.getCrossReferences();
+ this.shards = handler.getShards();
+ }
+
+ boolean isShard() {
+ return isShard;
+ }
+
+ Set<URI> getCrossReferences() {
+ if (crossReferenceURIs == null) {
+ crossReferenceURIs = crossReferences.stream()
+ .map(URI::createURI)
+ .collect(Collectors.toSet());
+ }
+ return crossReferenceURIs;
+ }
+
+ Set<URI> getShards() {
+ if (shardURIs == null) {
+ shardURIs = shards.stream()
+ .map(URI::createURI)
+ .collect(Collectors.toSet());
+ }
+ return shardURIs;
+ }
+ }
+
+ /**
+ * Index provider on the extension point.
+ */
+ public static final class IndexProvider implements IWorkspaceModelIndexProvider {
+ @Override
+ public WorkspaceModelIndex<?> get() {
+ return CrossReferenceIndex.INSTANCE.index;
+ }
+ }
+}
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndexHandler.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndexHandler.java
new file mode 100644
index 00000000000..4b6dbe96778
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/CrossReferenceIndexHandler.java
@@ -0,0 +1,270 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+import static org.eclipse.papyrus.infra.tools.util.TypeUtils.as;
+
+import java.util.Iterator;
+import java.util.Set;
+
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.emf.ecore.EClass;
+import org.eclipse.emf.ecore.EPackage;
+import org.eclipse.emf.ecore.EReference;
+import org.eclipse.emf.ecore.EStructuralFeature;
+import org.eclipse.emf.ecore.EcorePackage;
+import org.xml.sax.Attributes;
+import org.xml.sax.SAXException;
+import org.xml.sax.helpers.DefaultHandler;
+
+import com.google.common.base.Splitter;
+import com.google.common.base.Strings;
+import com.google.common.collect.BiMap;
+import com.google.common.collect.HashBiMap;
+import com.google.common.collect.Sets;
+
+/**
+ * XML parsing handler for extraction of resource cross-reference topology.
+ */
+public class CrossReferenceIndexHandler extends DefaultHandler {
+ private final URI fileURI;
+
+ private final boolean annotationOnly;
+
+ private Set<String> crossReferences = Sets.newHashSet();
+ private XMIElement shard;
+ private Set<String> shards = Sets.newHashSet();
+
+ // The (optional) parent references in the annotation
+ private Set<String> parents = Sets.newHashSet();
+
+ private BiMap<String, String> namespacePrefixes = HashBiMap.create();
+
+ private String xmiContainerQName;
+ private String xmiTypeQName;
+ private String eAnnotationSourceName;
+ private String eAnnotationReferencesName;
+
+ private XMIElement top;
+
+ /**
+ * Initializes me.
+ *
+ * @param fileURI
+ * the URI of the XMI file that I am parsing
+ */
+ public CrossReferenceIndexHandler(final URI fileURI) {
+ this(fileURI, false);
+ }
+
+ /**
+ * Initializes me.
+ *
+ * @param fileURI
+ * the URI of the XMI file that I am parsing
+ * @param annotationOnly
+ * whether we stop parsing as soon as the shard annotation has been processed
+ */
+ public CrossReferenceIndexHandler(URI fileURI, boolean annotationOnly) {
+ this.fileURI = fileURI;
+ this.annotationOnly = annotationOnly;
+ }
+
+ public URI getFileURI() {
+ return fileURI;
+ }
+
+ public Set<String> getCrossReferences() {
+ return crossReferences;
+ }
+
+ public boolean isShard() {
+ return shard != null;
+ }
+
+ public Set<String> getShards() {
+ return shards;
+ }
+
+ public Set<String> getParents() {
+ return parents;
+ }
+
+ @Override
+ public void startPrefixMapping(String prefix, String uri) throws SAXException {
+ namespacePrefixes.put(prefix, uri);
+
+ if ("xmi".equals(prefix)) { //$NON-NLS-1$
+ xmiTypeQName = qname(prefix, "type"); //$NON-NLS-1$
+ xmiContainerQName = qname(prefix, "XMI"); //$NON-NLS-1$
+ eAnnotationSourceName = "source"; //$NON-NLS-1$
+ eAnnotationReferencesName = "references"; //$NON-NLS-1$
+ }
+ }
+
+ protected final String qname(String prefix, String name) {
+ StringBuilder buf = new StringBuilder(prefix.length() + name.length() + 1);
+ return buf.append(prefix).append(':').append(name).toString();
+ }
+
+ @Override
+ public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
+ push(qName, attributes);
+
+ handleXMIElement(top, attributes);
+ }
+
+ protected final void push(String qName, Attributes attributes) {
+ top = new XMIElement(qName, attributes);
+ }
+
+ protected final XMIElement pop() {
+ XMIElement result = top;
+ if (top != null) {
+ top = top.parent;
+ }
+
+ return result;
+ }
+
+ protected void handleXMIElement(XMIElement element, Attributes attributes) throws SAXException {
+ if (element.getHREF() != null) {
+ URI xref = element.getHREF().trimFragment();
+
+ // Don't index internal references
+ if (!xref.equals(fileURI)) {
+ if (element.isContainment()) {
+ // Cross-resource containment is a shard relationship
+ shards.add(xref.toString());
+ } else if (isShard() && (element.parent == shard) && element.isRole(eAnnotationReferencesName)) {
+ // Handle shard parent resource reference. This is
+ // *not* a regular cross-resource reference
+ parents.add(xref.toString());
+ } else {
+ // Regular cross-resource reference
+ crossReferences.add(xref.toString());
+ }
+ }
+ } else if (element.isAnnotation()) {
+ String source = attributes.getValue(eAnnotationSourceName);
+ if (AbstractCrossReferenceIndex.SHARD_ANNOTATION_SOURCE.equals(source)) {
+ // This is a shard
+ shard = element;
+ }
+ }
+ }
+
+ @Override
+ public void endElement(String uri, String localName, String qName) throws SAXException {
+ XMIElement ended = pop();
+
+ if (annotationOnly && isShard() && (ended == shard)) {
+ // We have finished with shard linkage
+ throw new StopParsing();
+ }
+ }
+
+ //
+ // Nested types
+ //
+
+ protected final class XMIElement {
+ final XMIElement parent;
+
+ final String type;
+ final String role;
+ final String href;
+
+ private EClass eclass;
+
+ XMIElement(String qName, Attributes attributes) {
+ parent = top;
+
+ if ((parent == null) || parent.isXMIContainer()) {
+ // It's actually a type name
+ this.role = null;
+ this.type = qName;
+ } else {
+ this.role = qName;
+ this.type = attributes.getValue(xmiTypeQName);
+ }
+
+ this.href = attributes.getValue("href"); //$NON-NLS-1$
+ }
+
+ /** Am I the {@code xmi:XMI} container? */
+ boolean isXMIContainer() {
+ return (role == null) && ((type == null) || type.equals(xmiContainerQName));
+ }
+
+ boolean isRoot() {
+ return (parent == null) || parent.isXMIContainer();
+ }
+
+ boolean isRole(String roleName) {
+ return roleName.equals(role);
+ }
+
+ URI getHREF() {
+ return Strings.isNullOrEmpty(href) ? null : URI.createURI(href).resolve(fileURI);
+ }
+
+ boolean isAnnotation() {
+ return getEClass() == EcorePackage.Literals.EANNOTATION;
+ }
+
+ boolean isContainment() {
+ boolean result = false;
+
+ if (!isRoot()) {
+ EStructuralFeature feature = parent.getFeature(this.role);
+ result = (feature instanceof EReference)
+ && ((EReference) feature).isContainment();
+ }
+
+ return result;
+ }
+
+ EStructuralFeature getFeature(String role) {
+ EClass eclass = getEClass();
+
+ return (eclass == null) ? null : eclass.getEStructuralFeature(role);
+ }
+
+ EClass getEClass() {
+ if (eclass == null) {
+ if (type != null) {
+ Iterator<String> parts = Splitter.on(':').split(type).iterator();
+ String ns = namespacePrefixes.get(parts.next());
+ if (ns != null) {
+ EPackage epackage = EPackage.Registry.INSTANCE.getEPackage(ns);
+ if (epackage != null) {
+ eclass = as(epackage.getEClassifier(parts.next()), EClass.class);
+ }
+ }
+ } else if (parent != null) {
+ EClass parentEClass = parent.getEClass();
+ if (parentEClass != null) {
+ EReference ref = as(parentEClass.getEStructuralFeature(role), EReference.class);
+ if (ref != null) {
+ eclass = ref.getEReferenceType();
+ }
+ }
+ }
+ }
+
+ return eclass;
+ }
+ }
+}
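The handler above indexes a resource by streaming its XMI through SAX and inspecting `href` attributes, instead of loading the model into memory. A much-reduced sketch of that technique, with an invented XML shape and class name, showing how `href` values are collected with their fragments trimmed:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

class HrefCollector extends DefaultHandler {
    final Set<String> hrefs = new HashSet<>();

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) {
        String href = attributes.getValue("href"); // cross-references appear as href="resource#fragment"
        if (href != null) {
            int hash = href.indexOf('#');
            hrefs.add(hash < 0 ? href : href.substring(0, hash)); // keep only the resource part
        }
    }

    static Set<String> collect(String xml) {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setNamespaceAware(true);
            HrefCollector handler = new HrefCollector();
            factory.newSAXParser().parse(
                    new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), handler);
            return handler.hrefs;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The real handler additionally resolves each `href` against the resource URI and classifies it as cross-resource containment (a shard), a shard-annotation parent reference, or a plain cross-reference.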
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/InternalIndexUtil.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/InternalIndexUtil.java
new file mode 100644
index 00000000000..7a36b289094
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/InternalIndexUtil.java
@@ -0,0 +1,73 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+import java.util.Collections;
+import java.util.Objects;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.eclipse.emf.ecore.resource.ResourceSet;
+import org.eclipse.papyrus.infra.core.internal.language.ILanguageModel;
+import org.eclipse.papyrus.infra.core.language.ILanguageService;
+import org.eclipse.papyrus.infra.core.resource.ModelSet;
+
+/**
+ * Miscellaneous internal utilities supporting or using the model indexing facilities.
+ */
+public class InternalIndexUtil {
+
+ /**
+ * Not instantiable by clients.
+ */
+ private InternalIndexUtil() {
+ super();
+ }
+
+ /**
+ * Determine the resource file extensions that contain "semantic model" content,
+ * using heuristics if necessary to make a best guess.
+ *
+ * @param resourceSet
+ * a resource set
+ * @return the set of file extensions for resources that are expected to contain
+ * semantic model content that is interesting to index
+ */
+	public static Set<String> getSemanticModelFileExtensions(ResourceSet resourceSet) {
+ Set<String> result = null;
+
+ try {
+ if (resourceSet instanceof ModelSet) {
+				result = ILanguageService.getLanguageModels((ModelSet) resourceSet).stream()
+ .map(m -> m.getAdapter(ILanguageModel.class))
+ .filter(Objects::nonNull) // Not all models provide the adapter
+ .map(ILanguageModel::getModelFileExtension)
+ .filter(Objects::nonNull) // They really should provide this, though
+ .collect(Collectors.toSet());
+ }
+ } catch (Exception e) {
+ // We seem not to have the Language Service? That's fine
+ } catch (LinkageError e) {
+ // We seem to be operating without the Eclipse/OSGi run-time? That's fine
+ }
+
+		if ((result == null) || result.isEmpty()) {
+ // Best guess for common Papyrus applications
+ result = Collections.singleton("uml"); //$NON-NLS-1$
+ }
+
+ return result;
+ }
+}
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/OnDemandCrossReferenceIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/OnDemandCrossReferenceIndex.java
new file mode 100644
index 00000000000..6b139df2a0b
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/OnDemandCrossReferenceIndex.java
@@ -0,0 +1,182 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+import java.io.InputStream;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+import javax.xml.parsers.SAXParser;
+import javax.xml.parsers.SAXParserFactory;
+
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.emf.ecore.resource.ResourceSet;
+import org.eclipse.emf.ecore.resource.URIConverter;
+import org.eclipse.papyrus.infra.emf.Activator;
+import org.eclipse.papyrus.infra.emf.resource.ICrossReferenceIndex;
+import org.xml.sax.InputSource;
+
+import com.google.common.collect.ImmutableSetMultimap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.SetMultimap;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+
+/**
+ * An implementation of the {@link ICrossReferenceIndex Cross-Reference Index} API
+ * that determines shard relationships on the fly by pre-parsing the parent
+ * references in shard annotations, where they are available. It performs no
+ * other cross-reference indexing.
+ */
+public class OnDemandCrossReferenceIndex extends AbstractCrossReferenceIndex {
+
+ private static final ThreadGroup threadGroup = new ThreadGroup("XRefIndex"); //$NON-NLS-1$
+ private static final AtomicInteger threadCounter = new AtomicInteger();
+
+ private static final ListeningExecutorService executor = MoreExecutors.listeningDecorator(
+ new ThreadPoolExecutor(0, MAX_INDEX_JOBS, 60L, TimeUnit.SECONDS,
+ new SynchronousQueue<>(),
+ OnDemandCrossReferenceIndex::createThread));
+
+ private final Set<String> modelResourceFileExtensions;
+
+ /**
+ * Initializes me with the resource set in which I will index resources.
+ *
+ * @param resourceSet
+ * the contextual resource set, or {@code null} if none and
+ * the default heuristic- or otherwise-determined resources
+ * should be indexed on demand
+ */
+ public OnDemandCrossReferenceIndex(ResourceSet resourceSet) {
+ this(InternalIndexUtil.getSemanticModelFileExtensions(resourceSet));
+ }
+
+ /**
+ * Initializes me with the file extensions of resources that I will index.
+ *
+ * @param resourceFileExtensions
+ * the file extensions of resources to index on demand
+ */
+ public OnDemandCrossReferenceIndex(Set<String> resourceFileExtensions) {
+ super();
+
+ this.modelResourceFileExtensions = resourceFileExtensions;
+ }
+
+ private static Thread createThread(Runnable run) {
+ Thread result = new Thread(threadGroup, run, "XRefIndex-" + threadCounter.incrementAndGet());
+ result.setDaemon(true);
+ return result;
+ }
+
+ @Override
+ boolean isShard0(URI uriWithoutExtension) {
+ // Hook for on-demand indexing
+
+ // If the key isn't even there, we know that no interesting extension is
+ if (!shards.containsKey(uriWithoutExtension) ||
+ !intersects(shards.get(uriWithoutExtension), modelResourceFileExtensions)) {
+			// Parse each interesting model resource, not only the UML default
+			for (String modelExtension : modelResourceFileExtensions) {
+				index(uriWithoutExtension.appendFileExtension(modelExtension));
+			}
+ }
+
+ return super.isShard0(uriWithoutExtension);
+ }
+
+ private static <T> boolean intersects(Set<? extends T> a, Set<? extends T> b) {
+ return !a.isEmpty() && !b.isEmpty() && a.stream().anyMatch(b::contains);
+ }
+
+ @Override
+ Callable<SetMultimap<URI, URI>> getShardsCallable() {
+ // We don't parse on-the-fly for child shards; it requires scanning
+ // the whole resource
+ return () -> ImmutableSetMultimap.of();
+ }
+
+ @Override
+ Callable<SetMultimap<URI, URI>> getOutgoingCrossReferencesCallable() {
+ // We don't parse on-the-fly for cross-references; it requires scanning
+ // the whole resource
+ return () -> ImmutableSetMultimap.of();
+ }
+
+ @Override
+ Callable<SetMultimap<URI, URI>> getIncomingCrossReferencesCallable() {
+ // We don't parse on-the-fly for cross-references; it requires scanning
+ // the whole resource
+ return () -> ImmutableSetMultimap.of();
+ }
+
+ //
+ // Indexing
+ //
+
+ @Override
+ <V> ListenableFuture<V> afterIndex(Callable<V> callable) {
+ return executor.submit(callable);
+ }
+
+ void index(URI resourceURI) {
+ // Index this resource
+ Queue<URI> toIndex = Lists.newLinkedList();
+ toIndex.offer(resourceURI);
+
+ for (URI next = toIndex.poll(); next != null; next = toIndex.poll()) {
+ doIndex(next);
+
+ // And then, breadth-first, its parents that aren't already indexed
+ shardToParents.get(next).stream()
+ .filter(((Predicate<URI>) shards::containsKey).negate())
+ .forEach(toIndex::offer);
+ }
+ }
+
+ private void doIndex(URI resourceURI) {
+ // Only parse as far as the shard annotation, which occurs near the top
+ CrossReferenceIndexHandler handler = new CrossReferenceIndexHandler(resourceURI, true);
+
+ try (InputStream input = URIConverter.INSTANCE.createInputStream(resourceURI)) {
+ InputSource source = new InputSource(input);
+ SAXParserFactory factory = SAXParserFactory.newInstance();
+ factory.setValidating(false);
+ factory.setNamespaceAware(true);
+ SAXParser parser = factory.newSAXParser();
+
+ parser.parse(source, handler);
+ } catch (StopParsing stop) {
+ // Normal
+ } catch (Exception e) {
+ Activator.log.error("Failed to scan model resource for parent reference.", e); //$NON-NLS-1$
+ }
+
+ // Clear the aggregate map because we now have updates to include
+ aggregateShardToParents = null;
+
+ setShard(resourceURI, handler.isShard());
+ Set<URI> parents = handler.getParents().stream()
+ .map(URI::createURI)
+ .collect(Collectors.toSet());
+ shardToParents.putAll(resourceURI, parents);
+ }
+
+}
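The static executor above is a pool of at most `MAX_INDEX_JOBS` daemon threads that expire after 60 seconds of idleness, so background pre-parsing never keeps the JVM alive. The same `ThreadPoolExecutor` recipe in isolation (the class name and the bound passed to it are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class IndexExecutor {
    private static final AtomicInteger counter = new AtomicInteger();

    /** A pool of at most maxJobs daemon threads, discarded after 60 seconds idle. */
    static ExecutorService create(int maxJobs) {
        return new ThreadPoolExecutor(0, maxJobs, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                runnable -> {
                    Thread t = new Thread(runnable, "XRefIndex-" + counter.incrementAndGet());
                    t.setDaemon(true); // don't keep the JVM alive for background indexing
                    return t;
                });
    }
}
```

One caveat of this configuration: a `SynchronousQueue` does no buffering, so with the default rejection policy a submission made while all `maxJobs` threads are busy is rejected rather than queued.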
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/StopParsing.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/StopParsing.java
new file mode 100644
index 00000000000..4ef170d6ad4
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/StopParsing.java
@@ -0,0 +1,30 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource;
+
+/**
+ * A simple, recognizable throwable to bail out of XML parsing early.
+ */
+class StopParsing extends Error {
+
+ private static final long serialVersionUID = 1L;
+
+ /**
+ * Initializes me.
+ */
+ public StopParsing() {
+ super();
+ }
+
+}
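`StopParsing` implements the classic SAX early-exit idiom: a push parser offers no cancellation API, so the handler throws a recognizable unchecked throwable once it has what it needs, and the caller catches it as normal termination (as `OnDemandCrossReferenceIndex.doIndex` does). A self-contained illustration with invented names:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

class Stop extends Error {
    private static final long serialVersionUID = 1L;
}

class FirstElementHandler extends DefaultHandler {
    String first;
    int seen;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) {
        seen++;
        if (first == null) {
            first = qName;
            throw new Stop(); // found what we wanted; abandon the rest of the document
        }
    }

    static FirstElementHandler parse(String xml) {
        FirstElementHandler handler = new FirstElementHandler();
        try {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), handler);
        } catch (Stop expected) {
            // normal early termination
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
        return handler;
    }
}
```

In the index handler the throw happens when the shard annotation's element ends, which is why only the first few lines of the XMI text ever get read.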
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IIndexSaveParticipant.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IIndexSaveParticipant.java
new file mode 100644
index 00000000000..ab570a6bb22
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IIndexSaveParticipant.java
@@ -0,0 +1,44 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource.index;
+
+import java.io.IOException;
+import java.io.OutputStream;
+
+import org.eclipse.core.resources.ISaveParticipant;
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex;
+
+/**
+ * Protocol for an extension of the plug-in's {@link ISaveParticipant}
+ * that saves the current state of a {@link WorkspaceModelIndex}.
+ */
+public interface IIndexSaveParticipant {
+ /**
+ * Saves an {@code index} to a file.
+ *
+ * @param index
+ * the index to save
+	 * @param output
+ * the output stream on which to save it. The caller may choose to
+ * {@link OutputStream#close() close} this stream but is not
+ * required to
+ *
+ * @throws IOException
+	 *             on failure to write to the {@code output}
+ * @throws CoreException
+ * on failure to save the {@code index}
+ */
+ void save(WorkspaceModelIndex<?> index, OutputStream output) throws IOException, CoreException;
+}
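(Editorial note: the contract above lets the participant write to the stream without closing it, leaving the close decision to the caller. The sketch below illustrates that contract with stand-in types; `FakeIndex` and the serialization choice are hypothetical, not the Papyrus `WorkspaceModelIndex` API.)

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the save-participant contract: write the index state, flush, but do not close. */
public class SaveParticipantDemo {

	/** Stand-in for WorkspaceModelIndex: a map of file path to referenced paths. */
	static class FakeIndex {
		final Map<String, List<String>> entries = new HashMap<>();
	}

	interface SaveParticipant {
		void save(FakeIndex index, OutputStream output) throws IOException;
	}

	/** One possible participant: plain Java serialization of the index contents. */
	static final SaveParticipant JAVA_SERIALIZATION = (index, output) -> {
		// Wrap but do not close: per the contract, closing is the caller's choice
		ObjectOutputStream oos = new ObjectOutputStream(output);
		oos.writeObject(new HashMap<>(index.entries));
		oos.flush();
	};

	public static void main(String[] args) throws Exception {
		FakeIndex index = new FakeIndex();
		index.entries.put("model.uml", Arrays.asList("model.notation", "model.di"));
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		JAVA_SERIALIZATION.save(index, bytes);
		System.out.println(bytes.size() > 0);
	}
}
```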
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexManager.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexManager.java
new file mode 100644
index 00000000000..f13ff6e6830
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexManager.java
@@ -0,0 +1,1075 @@
+/*****************************************************************************
+ * Copyright (c) 2014, 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource.index;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Deque;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.eclipse.core.resources.IFile;
+import org.eclipse.core.resources.IProject;
+import org.eclipse.core.resources.IResource;
+import org.eclipse.core.resources.IResourceChangeEvent;
+import org.eclipse.core.resources.IResourceChangeListener;
+import org.eclipse.core.resources.IResourceDelta;
+import org.eclipse.core.resources.IResourceDeltaVisitor;
+import org.eclipse.core.resources.IResourceVisitor;
+import org.eclipse.core.resources.IWorkspaceRoot;
+import org.eclipse.core.resources.ResourcesPlugin;
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.IConfigurationElement;
+import org.eclipse.core.runtime.IProgressMonitor;
+import org.eclipse.core.runtime.IStatus;
+import org.eclipse.core.runtime.Platform;
+import org.eclipse.core.runtime.QualifiedName;
+import org.eclipse.core.runtime.Status;
+import org.eclipse.core.runtime.SubMonitor;
+import org.eclipse.core.runtime.content.IContentType;
+import org.eclipse.core.runtime.content.IContentTypeManager;
+import org.eclipse.core.runtime.jobs.IJobChangeEvent;
+import org.eclipse.core.runtime.jobs.IJobChangeListener;
+import org.eclipse.core.runtime.jobs.Job;
+import org.eclipse.core.runtime.jobs.JobChangeAdapter;
+import org.eclipse.papyrus.infra.core.utils.JobBasedFuture;
+import org.eclipse.papyrus.infra.core.utils.JobExecutorService;
+import org.eclipse.papyrus.infra.emf.Activator;
+import org.eclipse.papyrus.infra.emf.resource.index.IWorkspaceModelIndexListener;
+import org.eclipse.papyrus.infra.emf.resource.index.IWorkspaceModelIndexProvider;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndexEvent;
+import org.eclipse.papyrus.infra.tools.util.ReferenceCounted;
+
+import com.google.common.base.Objects;
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.collect.Multimap;
+import com.google.common.collect.Queues;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+
+/**
+ * A controller of the indexing process for {@link WorkspaceModelIndex}s,
+ * including initial loading of an index and invocation of incremental
+ * indexing as resources in the workspace change.
+ */
+public class IndexManager {
+ private static final int MAX_INDEX_RETRIES = 3;
+
+ private static final IndexManager INSTANCE = new IndexManager();
+
+ private final IWorkspaceRoot wsRoot = ResourcesPlugin.getWorkspace().getRoot();
+ private final IResourceChangeListener workspaceListener = new WorkspaceListener();
+
+ private final Map<IProject, AbstractIndexJob> activeJobs = Maps.newHashMap();
+ private final ContentTypeService contentTypeService;
+
+ private Map<QualifiedName, InternalModelIndex> indices;
+ private JobWrangler jobWrangler;
+ private final CopyOnWriteArrayList<IndexListener> listeners = new CopyOnWriteArrayList<>();
+
+ static {
+ // This cannot be done in the constructor because indices that I load
+ // depend on the INSTANCE field already being set
+ INSTANCE.startManager();
+ }
+
+ public IndexManager() {
+ super();
+
+ contentTypeService = ContentTypeService.getInstance();
+ }
+
+ public static IndexManager getInstance() {
+ return INSTANCE;
+ }
+
+ public void dispose() {
+ if (indices != null) {
+ wsRoot.getWorkspace().removeResourceChangeListener(workspaceListener);
+ Job.getJobManager().cancel(this);
+
+ indices.values().forEach(InternalModelIndex::dispose);
+			// deliberately leave 'indices' non-null so that the manager cannot be started again
+
+ ContentTypeService.dispose(contentTypeService);
+ }
+ }
+
+ public void startManager() {
+ if (indices != null) {
+ throw new IllegalStateException("index manager already started"); //$NON-NLS-1$
+ }
+
+ // Load our indices and find out from them how many
+ // jobs we need make available
+ indices = loadIndices();
+ int maxConcurrentJobs = indices.values().stream()
+ .mapToInt(InternalModelIndex::getMaxIndexJobs)
+ .max()
+ .orElse(5);
+ jobWrangler = new JobWrangler(maxConcurrentJobs);
+
+ // Start the indices now
+ indices.values().forEach(this::startIndex);
+
+ // And load or index from scratch
+ index(Arrays.asList(wsRoot.getProjects()));
+ wsRoot.getWorkspace().addResourceChangeListener(workspaceListener, IResourceChangeEvent.POST_CHANGE);
+ }
+
+ private void startIndex(InternalModelIndex index) {
+ index.start(this);
+ }
+
+ protected Map<QualifiedName, InternalModelIndex> loadIndices() {
+ Map<QualifiedName, InternalModelIndex> result = Maps.newHashMap();
+
+ for (IConfigurationElement config : Platform.getExtensionRegistry().getConfigurationElementsFor(Activator.PLUGIN_ID, "index")) { //$NON-NLS-1$
+ if ("indexProvider".equals(config.getName())) { //$NON-NLS-1$
+ try {
+ IWorkspaceModelIndexProvider provider = (IWorkspaceModelIndexProvider) config.createExecutableExtension("class"); //$NON-NLS-1$
+ WorkspaceModelIndex<?> index = provider.get();
+
+ if (index == null) {
+ Activator.log.warn("No index provided by " + config.getContributor().getName()); //$NON-NLS-1$
+ } else {
+ QualifiedName key = index.getIndexKey();
+ if (key == null) {
+ Activator.log.warn("Index has no key and will be ignored: " + index); //$NON-NLS-1$
+ } else {
+ InternalModelIndex internal = index;
+ // Ensure that the index can load classes from its
+ // persistent store that are defined in its owner's
+ // bundle
+ internal.setOwnerClassLoader(provider.getClass().getClassLoader());
+ result.put(key, internal);
+ }
+ }
+ } catch (ClassCastException e) {
+ Activator.log.error("Expected IWorkspaceModelIndexProvider in " + config.getContributor().getName(), e); //$NON-NLS-1$
+ } catch (CoreException e) {
+ Activator.log.log(e.getStatus());
+ } catch (Exception e) {
+ Activator.log.error("Failed to obtain index from provider in " + config.getContributor().getName(), e); //$NON-NLS-1$
+ }
+ }
+ }
+
+ return result;
+ }
+
+ IContentType[] getContentTypes(IFile file) {
+ return contentTypeService.getContentTypes(file);
+ }
+
+ /**
+ * Obtains an asynchronous future result that is scheduled to run after
+ * any pending indexing work has completed.
+ *
+ * @param index
+ * the index that is making the request
+ * @param callable
+ * the operation to schedule
+ *
+ * @return the future result of the operation
+ */
+ <V> ListenableFuture<V> afterIndex(InternalModelIndex index, Callable<V> callable) {
+ ListenableFuture<V> result;
+
+ if (Job.getJobManager().find(this).length == 0) {
+ // Result is available now
+ try {
+ result = Futures.immediateFuture(callable.call());
+ } catch (Exception e) {
+ result = Futures.immediateFailedFuture(e);
+ }
+ } else {
+ JobBasedFuture<V> job = new JobBasedFuture<V>("Wait for workspace model index") {
+ {
+ setSystem(true);
+ }
+
+ @Override
+ protected V compute(IProgressMonitor monitor) throws Exception {
+ V result;
+
+ Job.getJobManager().join(IndexManager.this, monitor);
+ result = callable.call();
+
+ return result;
+ }
+ };
+ job.schedule();
+ result = job;
+ }
+
+ return result;
+ }
+
+ void index(Collection<? extends IProject> projects) {
+ List<IndexProjectJob> jobs = Lists.newArrayListWithCapacity(projects.size());
+ for (IProject next : projects) {
+ jobs.add(new IndexProjectJob(next));
+ }
+ schedule(jobs);
+ }
+
+ void index(IProject project) {
+ schedule(new IndexProjectJob(project));
+ }
+
+ void process(IFile file) throws CoreException {
+ IProject project = file.getProject();
+
+ safeIterateIndices(index -> {
+ if (index.match(file)) {
+ index.process(file);
+ } else {
+ index.remove(project, file);
+ }
+ });
+ }
+
+ private void safeIterateIndices(IndexAction action) throws CoreException {
+ CoreException exception = null;
+
+ for (InternalModelIndex index : indices.values()) {
+ try {
+ action.apply(index);
+ } catch (CoreException e) {
+				if (exception == null) {
+					exception = e;
+				}
+ }
+ }
+
+ if (exception != null) {
+ throw exception;
+ }
+ }
+
+ void remove(IProject project, IFile file) throws CoreException {
+ safeIterateIndices(index -> index.remove(project, file));
+ }
+
+ void remove(IProject project) throws CoreException {
+ safeIterateIndices(index -> index.remove(project));
+ }
+
+ ReindexProjectJob reindex(IProject project, Collection<? extends IndexDelta> deltas) {
+ ReindexProjectJob result = null;
+
+ synchronized (activeJobs) {
+ AbstractIndexJob active = activeJobs.get(project);
+
+ if (active != null) {
+ switch (active.kind()) {
+ case REINDEX:
+ ReindexProjectJob reindex = (ReindexProjectJob) active;
+ reindex.addDeltas(deltas);
+ break;
+ case INDEX:
+ IndexProjectJob index = (IndexProjectJob) active;
+ ReindexProjectJob followup = index.getFollowup();
+ if (followup != null) {
+ followup.addDeltas(deltas);
+ } else {
+ followup = new ReindexProjectJob(project, deltas);
+ index.setFollowup(followup);
+ }
+ break;
+ case MASTER:
+ throw new IllegalStateException("Master job is in the active table."); //$NON-NLS-1$
+ }
+ } else {
+ // No active job. We'll need a new one
+ result = new ReindexProjectJob(project, deltas);
+ }
+ }
+
+ return result;
+ }
+
+ IResourceVisitor getWorkspaceVisitor(final IProgressMonitor monitor) {
+ return new IResourceVisitor() {
+
+ @Override
+ public boolean visit(IResource resource) throws CoreException {
+ if (resource.getType() == IResource.FILE) {
+ process((IFile) resource);
+ }
+
+ return !monitor.isCanceled();
+ }
+ };
+ }
+
+ private void schedule(Collection<? extends AbstractIndexJob> jobs) {
+ // Synchronize on the active jobs because this potentially alters the wrangler's follow-up job
+ synchronized (activeJobs) {
+ jobWrangler.add(jobs);
+ }
+ }
+
+ private void schedule(AbstractIndexJob job) {
+ // Synchronize on the active jobs because this potentially alters the wrangler's follow-up job
+ synchronized (activeJobs) {
+ jobWrangler.add(job);
+ }
+ }
+
+ public void addListener(WorkspaceModelIndex<?> index, IWorkspaceModelIndexListener listener) {
+ listeners.addIfAbsent(new IndexListener(index, listener));
+ }
+
+ public void removeListener(WorkspaceModelIndex<?> index, IWorkspaceModelIndexListener listener) {
+ listeners.removeIf(l -> Objects.equal(l.index, index) && Objects.equal(l.listener, listener));
+ }
+
+ private void notifyStarting(AbstractIndexJob indexJob) {
+ if (!listeners.isEmpty()) {
+ Map<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> events = Maps.newHashMap();
+ java.util.function.Function<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> eventFunction = index -> {
+ switch (indexJob.kind()) {
+ case INDEX:
+ return new WorkspaceModelIndexEvent(index, WorkspaceModelIndexEvent.ABOUT_TO_CALCULATE, indexJob.getProject());
+ case REINDEX:
+ return new WorkspaceModelIndexEvent(index, WorkspaceModelIndexEvent.ABOUT_TO_RECALCULATE, indexJob.getProject());
+ default:
+ throw new IllegalArgumentException(indexJob.kind().name());
+ }
+ };
+
+ switch (indexJob.kind()) {
+ case INDEX:
+ for (IndexListener next : listeners) {
+ try {
+ next.listener.indexAboutToCalculate(events.computeIfAbsent(next.index, eventFunction));
+ } catch (Exception e) {
+						Activator.log.error("Uncaught exception in index listener.", e); //$NON-NLS-1$
+ }
+ }
+ break;
+ case REINDEX:
+ for (IndexListener next : listeners) {
+ try {
+ next.listener.indexAboutToRecalculate(events.computeIfAbsent(next.index, eventFunction));
+ } catch (Exception e) {
+						Activator.log.error("Uncaught exception in index listener.", e); //$NON-NLS-1$
+ }
+ }
+ break;
+ case MASTER:
+ // Pass
+ break;
+ }
+ }
+ }
+
+ private void notifyFinished(AbstractIndexJob indexJob, IStatus status) {
+ if (!listeners.isEmpty()) {
+ if ((status != null) && (status.getSeverity() >= IStatus.ERROR)) {
+ Map<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> events = Maps.newHashMap();
+ java.util.function.Function<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> eventFunction = index -> new WorkspaceModelIndexEvent(index, WorkspaceModelIndexEvent.FAILED, indexJob.getProject());
+
+ for (IndexListener next : listeners) {
+ try {
+ next.listener.indexFailed(events.computeIfAbsent(next.index, eventFunction));
+ } catch (Exception e) {
+						Activator.log.error("Uncaught exception in index listener.", e); //$NON-NLS-1$
+ }
+ }
+ } else {
+ Map<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> events = Maps.newHashMap();
+ java.util.function.Function<WorkspaceModelIndex<?>, WorkspaceModelIndexEvent> eventFunction = index -> {
+ switch (indexJob.kind()) {
+ case INDEX:
+ return new WorkspaceModelIndexEvent(index, WorkspaceModelIndexEvent.CALCULATED, indexJob.getProject());
+ case REINDEX:
+ return new WorkspaceModelIndexEvent(index, WorkspaceModelIndexEvent.RECALCULATED, indexJob.getProject());
+ default:
+ throw new IllegalArgumentException(indexJob.kind().name());
+ }
+ };
+
+ switch (indexJob.kind()) {
+ case INDEX:
+ for (IndexListener next : listeners) {
+ try {
+ next.listener.indexCalculated(events.computeIfAbsent(next.index, eventFunction));
+ } catch (Exception e) {
+						Activator.log.error("Uncaught exception in index listener.", e); //$NON-NLS-1$
+ }
+ }
+ break;
+ case REINDEX:
+ for (IndexListener next : listeners) {
+ try {
+ next.listener.indexRecalculated(events.computeIfAbsent(next.index, eventFunction));
+ } catch (Exception e) {
+						Activator.log.error("Uncaught exception in index listener.", e); //$NON-NLS-1$
+ }
+ }
+ break;
+ case MASTER:
+ // Pass
+ break;
+ }
+ }
+ }
+ }
+
+ //
+ // Nested types
+ //
+
+ private enum JobKind {
+ MASTER, INDEX, REINDEX;
+
+ boolean isSystem() {
+ return this != MASTER;
+ }
+ }
+
+ private abstract class AbstractIndexJob extends Job {
+ private final IProject project;
+
+ private volatile Semaphore permit;
+
+ AbstractIndexJob(String name, IProject project) {
+ this(name, project, true);
+ }
+
+ AbstractIndexJob(String name, IProject project, boolean register) {
+ super(name);
+
+ this.project = project;
+
+ if ((project != null) && register) {
+ setRule(project);
+ synchronized (activeJobs) {
+ if (!activeJobs.containsKey(project)) {
+ activeJobs.put(project, this);
+ }
+ }
+ }
+
+ setSystem(kind().isSystem());
+ }
+
+ @Override
+ public boolean belongsTo(Object family) {
+ return family == IndexManager.this;
+ }
+
+ final IProject getProject() {
+ return project;
+ }
+
+ abstract JobKind kind();
+
+ @Override
+ protected final IStatus run(IProgressMonitor monitor) {
+ IStatus result;
+
+ try {
+ result = doRun(monitor);
+ } finally {
+ synchronized (activeJobs) {
+ AbstractIndexJob followup = getFollowup();
+
+ if (project != null) {
+ if (followup == null) {
+ activeJobs.remove(project);
+ } else {
+ activeJobs.put(project, followup);
+ }
+ }
+
+ if (followup != null) {
+ // Kick off the follow-up job
+ IndexManager.this.schedule(followup);
+ }
+ }
+ }
+
+ return result;
+ }
+
+ final Semaphore getPermit() {
+ return permit;
+ }
+
+ final void setPermit(Semaphore permit) {
+ this.permit = permit;
+ }
+
+ protected abstract IStatus doRun(IProgressMonitor monitor);
+
+ protected AbstractIndexJob getFollowup() {
+ return null;
+ }
+ }
+
+ private class JobWrangler extends AbstractIndexJob {
+ private final Lock lock = new ReentrantLock();
+
+ private final Deque<AbstractIndexJob> queue = Queues.newArrayDeque();
+
+ private final AtomicBoolean active = new AtomicBoolean();
+ private final Semaphore indexJobSemaphore;
+
+ private volatile boolean cancelled;
+
+ JobWrangler(int maxConcurrentJobs) {
+ super("Workspace model indexer", null);
+
+ indexJobSemaphore = new Semaphore((maxConcurrentJobs <= 0) ? Integer.MAX_VALUE : maxConcurrentJobs);
+ }
+
+ @Override
+ JobKind kind() {
+ return JobKind.MASTER;
+ }
+
+ void add(AbstractIndexJob job) {
+ lock.lock();
+
+ try {
+ scheduleIfNeeded();
+ queue.add(job);
+ } finally {
+ lock.unlock();
+ }
+ }
+
+ private void scheduleIfNeeded() {
+ if (active.compareAndSet(false, true)) {
+ // I am a new job
+ schedule();
+ }
+ }
+
+ void add(Iterable<? extends AbstractIndexJob> jobs) {
+ lock.lock();
+
+ try {
+ for (AbstractIndexJob next : jobs) {
+ add(next);
+ }
+ } finally {
+ lock.unlock();
+ }
+ }
+
+ @Override
+ protected void canceling() {
+ cancelled = true;
+ getThread().interrupt();
+ }
+
+ @Override
+ protected IStatus doRun(IProgressMonitor progressMonitor) {
+ final AtomicInteger pending = new AtomicInteger(); // How many permits have we issued?
+ final Condition pendingChanged = lock.newCondition();
+
+ final SubMonitor monitor = SubMonitor.convert(progressMonitor, IProgressMonitor.UNKNOWN);
+
+ IStatus result = Status.OK_STATUS;
+
+ IJobChangeListener listener = new JobChangeAdapter() {
+ private final Map<IProject, Integer> retries = Maps.newHashMap();
+
+ private Semaphore getIndexJobPermit(Job job) {
+ return (job instanceof AbstractIndexJob)
+ ? ((AbstractIndexJob) job).getPermit()
+ : null;
+ }
+
+ @Override
+ public void aboutToRun(IJobChangeEvent event) {
+ Job starting = event.getJob();
+
+ if (getIndexJobPermit(starting) == indexJobSemaphore) {
+ // one of mine is starting
+ AbstractIndexJob indexJob = (AbstractIndexJob) starting;
+ notifyStarting(indexJob);
+ }
+ }
+
+ @Override
+ public void done(IJobChangeEvent event) {
+ final Job finished = event.getJob();
+ if (getIndexJobPermit(finished) == indexJobSemaphore) {
+ try {
+ // one of mine has finished
+ AbstractIndexJob indexJob = (AbstractIndexJob) finished;
+ IProject project = indexJob.getProject();
+
+ notifyFinished(indexJob, event.getResult());
+
+ if (project != null) {
+ synchronized (retries) {
+ if ((event.getResult() != null) && (event.getResult().getSeverity() >= IStatus.ERROR)) {
+ // Indexing failed to complete. Need to re-build the index
+								int count = retries.containsKey(project) ? retries.get(project) : 0;
+								if (count++ < MAX_INDEX_RETRIES) {
+									// Only retry up to MAX_INDEX_RETRIES times
+									index(project);
+								}
+								retries.put(project, count);
+ } else {
+ // Successful re-indexing. Forget the retries
+ retries.remove(project);
+ }
+ }
+ }
+ } finally {
+ // Release this job's permit for the next one in the queue
+ indexJobSemaphore.release();
+
+ // And it's no longer pending
+ pending.decrementAndGet();
+
+ lock.lock();
+ try {
+ pendingChanged.signalAll();
+ } finally {
+ lock.unlock();
+ }
+ }
+ }
+ }
+ };
+
+ getJobManager().addJobChangeListener(listener);
+
+ lock.lock();
+
+ try {
+ out: for (;;) {
+ monitor.setWorkRemaining(queue.size());
+
+ for (AbstractIndexJob next = queue.poll(); next != null; next = queue.poll()) {
+ lock.unlock();
+ try {
+ if (cancelled) {
+ throw new InterruptedException();
+ }
+
+ // Enforce the concurrent jobs limit
+ indexJobSemaphore.acquire();
+ next.setPermit(indexJobSemaphore);
+ pending.incrementAndGet();
+
+ // Now go
+ next.schedule();
+
+ monitor.worked(1);
+ } catch (InterruptedException e) {
+						// In case the interruption happened some other way
+ cancelled = true;
+
+ // We were cancelled. Push this job back and re-schedule
+ lock.lock();
+ try {
+ queue.addFirst(next);
+ } finally {
+ lock.unlock();
+ }
+ result = Status.CANCEL_STATUS;
+ break out;
+ } finally {
+ lock.lock();
+ }
+ }
+
+ if ((pending.get() <= 0) && queue.isEmpty()) {
+ // Nothing left to wait for
+ break out;
+ } else if (pending.get() > 0) {
+ try {
+ if (cancelled) {
+ throw new InterruptedException();
+ }
+
+ pendingChanged.await();
+ } catch (InterruptedException e) {
+						// In case the interruption happened some other way
+ cancelled = true;
+
+ // We were cancelled. Re-schedule
+ result = Status.CANCEL_STATUS;
+ break out;
+ }
+ }
+ }
+
+ // We've finished wrangling index jobs, for now
+ } finally {
+ try {
+ // If we were canceled then we re-schedule after a delay to recover
+ if (cancelled) {
+ // We cannot un-cancel a job, so we must replace ourselves with a new job
+ schedule(1000L);
+ cancelled = false;
+ } else {
+ // Don't think we're active any longer
+ active.compareAndSet(true, false);
+
+ // Double-check
+ if (!queue.isEmpty()) {
+ // We'll have to go around again
+ scheduleIfNeeded();
+ }
+ }
+ } finally {
+ lock.unlock();
+ getJobManager().removeJobChangeListener(listener);
+ }
+ }
+
+ return result;
+ }
+ }
+
+ private class IndexProjectJob extends AbstractIndexJob {
+ private ReindexProjectJob followup;
+
+ IndexProjectJob(IProject project) {
+ super("Indexing project " + project.getName(), project);
+ }
+
+ @Override
+ JobKind kind() {
+ return JobKind.INDEX;
+ }
+
+ @Override
+ protected IStatus doRun(IProgressMonitor monitor) {
+ IStatus result = Status.OK_STATUS;
+ final IProject project = getProject();
+
+ monitor.beginTask("Indexing models in project " + project.getName(), IProgressMonitor.UNKNOWN);
+
+ try {
+ if (project.isAccessible()) {
+ project.accept(getWorkspaceVisitor(monitor));
+ } else {
+ remove(project);
+ }
+
+ if (monitor.isCanceled()) {
+ result = Status.CANCEL_STATUS;
+ }
+ } catch (CoreException e) {
+ result = e.getStatus();
+ } finally {
+ monitor.done();
+ }
+
+ return result;
+ }
+
+ void setFollowup(ReindexProjectJob followup) {
+ this.followup = followup;
+ }
+
+ @Override
+ protected ReindexProjectJob getFollowup() {
+ return followup;
+ }
+ }
+
+ private class WorkspaceListener implements IResourceChangeListener {
+ @Override
+ public void resourceChanged(IResourceChangeEvent event) {
+ final Multimap<IProject, IndexDelta> deltas = ArrayListMultimap.create();
+
+ try {
+ event.getDelta().accept(new IResourceDeltaVisitor() {
+
+ @Override
+ public boolean visit(IResourceDelta delta) throws CoreException {
+ if (delta.getResource().getType() == IResource.FILE) {
+ IFile file = (IFile) delta.getResource();
+
+ switch (delta.getKind()) {
+ case IResourceDelta.CHANGED:
+ if ((delta.getFlags() & (IResourceDelta.SYNC | IResourceDelta.CONTENT | IResourceDelta.REPLACED)) != 0) {
+ // Re-index in place
+ deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.REINDEX));
+ }
+ break;
+ case IResourceDelta.REMOVED:
+ deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.UNINDEX));
+ break;
+ case IResourceDelta.ADDED:
+ deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.INDEX));
+ break;
+ }
+ }
+ return true;
+ }
+ });
+ } catch (CoreException e) {
+ Activator.log.error("Failed to analyze resource changes for re-indexing.", e); //$NON-NLS-1$
+ }
+
+ if (!deltas.isEmpty()) {
+ List<ReindexProjectJob> jobs = Lists.newArrayListWithCapacity(deltas.keySet().size());
+ for (IProject next : deltas.keySet()) {
+ ReindexProjectJob reindex = reindex(next, deltas.get(next));
+ if (reindex != null) {
+ jobs.add(reindex);
+ }
+ }
+ schedule(jobs);
+ }
+ }
+ }
+
+ private static final class IndexDelta {
+ private final IFile file;
+
+ private final DeltaKind kind;
+
+ IndexDelta(IFile file, DeltaKind kind) {
+ this.file = file;
+ this.kind = kind;
+ }
+
+ DeltaKind kind() {
+ return kind;
+ }
+
+ IFile file() {
+ return file;
+ }
+
+ //
+ // Nested types
+ //
+
+ enum DeltaKind {
+ INDEX, REINDEX, UNINDEX;
+ }
+ }
+
+ private class ReindexProjectJob extends AbstractIndexJob {
+ private final IProject project;
+ private final ConcurrentLinkedQueue<IndexDelta> deltas;
+
+ ReindexProjectJob(IProject project, Collection<? extends IndexDelta> deltas) {
+ super("Re-indexing project " + project.getName(), project);
+
+ this.project = project;
+ this.deltas = Queues.newConcurrentLinkedQueue(deltas);
+ }
+
+ @Override
+ JobKind kind() {
+ return JobKind.REINDEX;
+ }
+
+ void addDeltas(Iterable<? extends IndexDelta> deltas) {
+ Iterables.addAll(this.deltas, deltas);
+ }
+
+ @Override
+ protected IStatus doRun(IProgressMonitor monitor) {
+ IStatus result = Status.OK_STATUS;
+
+ monitor.beginTask("Re-indexing models in project " + project.getName(), IProgressMonitor.UNKNOWN);
+
+ try {
+ for (IndexDelta next = deltas.poll(); next != null; next = deltas.poll()) {
+ if (monitor.isCanceled()) {
+ result = Status.CANCEL_STATUS;
+ break;
+ }
+
+ try {
+ switch (next.kind()) {
+ case INDEX:
+ case REINDEX:
+ process(next.file());
+ break;
+ case UNINDEX:
+ remove(project, next.file());
+ break;
+ }
+ } catch (CoreException e) {
+ result = e.getStatus();
+ break;
+ } finally {
+ monitor.worked(1);
+ }
+ }
+ } finally {
+ monitor.done();
+ }
+
+ return result;
+ }
+
+ @Override
+ protected AbstractIndexJob getFollowup() {
+ // If I still have work to do, then I am my own follow-up
+ return deltas.isEmpty() ? null : this;
+ }
+ }
+
+ private static final class ContentTypeService extends ReferenceCounted<ContentTypeService> {
+ private static ContentTypeService instance = null;
+
+ private final ExecutorService serialExecution = new JobExecutorService();
+
+ private final IContentTypeManager mgr = Platform.getContentTypeManager();
+
+ private ContentTypeService() {
+ super();
+ }
+
+ synchronized static ContentTypeService getInstance() {
+ ContentTypeService result = instance;
+
+ if (result == null) {
+ result = new ContentTypeService();
+ instance = result;
+ }
+
+ return result.retain();
+ }
+
+ synchronized static void dispose(ContentTypeService service) {
+ service.release();
+ }
+
+ @Override
+ protected void dispose() {
+ serialExecution.shutdownNow();
+
+ if (instance == this) {
+ instance = null;
+ }
+ }
+
+ IContentType[] getContentTypes(final IFile file) {
+ Future<IContentType[]> futureResult = serialExecution.submit(new Callable<IContentType[]>() {
+
+ @Override
+ public IContentType[] call() {
+ IContentType[] result = null;
+ InputStream input = null;
+
+ if (file.isAccessible()) {
+ try {
+ input = file.getContents(true);
+ result = mgr.findContentTypesFor(input, file.getName());
+ } catch (Exception e) {
+ Activator.log.error("Failed to index file " + file.getFullPath(), e); //$NON-NLS-1$
+ } finally {
+ if (input != null) {
+ try {
+ input.close();
+ } catch (IOException e) {
+ Activator.log.error("Failed to close indexed file " + file.getFullPath(), e); //$NON-NLS-1$
+ }
+ }
+ }
+ }
+
+ return result;
+ }
+ });
+
+ return Futures.getUnchecked(futureResult);
+ }
+ }
+
+ @FunctionalInterface
+ private interface IndexAction {
+ void apply(InternalModelIndex index) throws CoreException;
+ }
+
+ private static final class IndexListener {
+ final WorkspaceModelIndex<?> index;
+ final IWorkspaceModelIndexListener listener;
+
+ IndexListener(WorkspaceModelIndex<?> index, IWorkspaceModelIndexListener listener) {
+ super();
+
+ this.index = index;
+ this.listener = listener;
+ }
+
+ @Override
+ public int hashCode() {
+ final int prime = 31;
+ int result = 1;
+ result = prime * result + ((index == null) ? 0 : index.hashCode());
+ result = prime * result + ((listener == null) ? 0 : listener.hashCode());
+ return result;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (!(obj instanceof IndexListener)) {
+ return false;
+ }
+ IndexListener other = (IndexListener) obj;
+ if (index == null) {
+ if (other.index != null) {
+ return false;
+ }
+ } else if (!index.equals(other.index)) {
+ return false;
+ }
+ if (listener == null) {
+ if (other.listener != null) {
+ return false;
+ }
+ } else if (!listener.equals(other.listener)) {
+ return false;
+ }
+ return true;
+ }
+
+ }
+}
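(Editorial note: `JobWrangler` caps concurrent index jobs by issuing permits from a `Semaphore` and releasing each permit when the job's done-event fires. The sketch below isolates that pattern with plain `java.util.concurrent` types; `BoundedDispatcher` and its methods are hypothetical names, not Eclipse Jobs API.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of the JobWrangler pattern: a semaphore limits how many queued tasks run at once. */
public class BoundedDispatcher {
	private final Semaphore permits;
	private final ExecutorService pool = Executors.newCachedThreadPool();

	BoundedDispatcher(int maxConcurrent) {
		// Same convention as JobWrangler: a non-positive limit means "effectively unbounded"
		permits = new Semaphore((maxConcurrent <= 0) ? Integer.MAX_VALUE : maxConcurrent);
	}

	Future<?> submit(Runnable task) throws InterruptedException {
		permits.acquire(); // block until one of the limited slots is free
		return pool.submit(() -> {
			try {
				task.run();
			} finally {
				permits.release(); // hand the slot to the next queued task
			}
		});
	}

	void shutdown() throws InterruptedException {
		pool.shutdown();
		pool.awaitTermination(10, TimeUnit.SECONDS);
	}

	public static void main(String[] args) throws Exception {
		BoundedDispatcher dispatcher = new BoundedDispatcher(2);
		AtomicInteger completed = new AtomicInteger();
		for (int i = 0; i < 5; i++) {
			dispatcher.submit(completed::incrementAndGet);
		}
		dispatcher.shutdown();
		System.out.println(completed.get());
	}
}
```

The real implementation is more involved because Eclipse `Job`s are scheduled, not executed directly, so the release happens in an `IJobChangeListener.done(...)` callback rather than a `finally` block, but the invariant is the same: at most `maxConcurrent` permits are ever out at once.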
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexPersistenceManager.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexPersistenceManager.java
new file mode 100644
index 00000000000..31864050568
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/IndexPersistenceManager.java
@@ -0,0 +1,256 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource.index;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Collections;
+import java.util.Map;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipInputStream;
+import java.util.zip.ZipOutputStream;
+
+import org.eclipse.core.resources.ISaveContext;
+import org.eclipse.core.resources.ISaveParticipant;
+import org.eclipse.core.resources.ISavedState;
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.IPath;
+import org.eclipse.core.runtime.IStatus;
+import org.eclipse.core.runtime.Path;
+import org.eclipse.core.runtime.Status;
+import org.eclipse.papyrus.infra.emf.Activator;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Persistence manager for {@link WorkspaceModelIndex}es.
+ */
+public class IndexPersistenceManager {
+ private static final IPath INDEX_DIR = new Path("index").addTrailingSeparator(); //$NON-NLS-1$
+
+ private static final String ZIP_ENTRY = "Contents"; //$NON-NLS-1$
+
+ public static final IndexPersistenceManager INSTANCE = new IndexPersistenceManager();
+
+ private final Map<WorkspaceModelIndex<?>, IIndexSaveParticipant> workspaceIndices = Maps.newConcurrentMap();
+
+ // Index file paths relative to the plug-in state location, by index name
+ private Map<String, IPath> indexFiles = Collections.emptyMap();
+
+ /**
+ * Not instantiable by clients.
+ */
+ private IndexPersistenceManager() {
+ super();
+ }
+
+ /**
+ * Initializes the persistence manager with the previous Eclipse session's
+ * saved state.
+ *
+ * @param state
+ * the previous session's state, or {@code null} if none
+ * (for example, if this is the first run)
+ *
+ * @throws CoreException
+ * on failure to initialize the index persistence manager
+ */
+ public void initialize(ISavedState state) throws CoreException {
+ // A null state means there is no previous session (for example, the first run)
+ if (state != null) {
+ indexFiles = Collections.unmodifiableMap(
+ Stream.of(state.getFiles())
+ .collect(Collectors.toMap(IPath::toString, state::lookup)));
+ }
+ }
+
+ /**
+ * Registers a persistent model index.
+ *
+ * @param index
+ * the index to register
+ * @param saveParticipant
+ * its workspace-save delegate
+ *
+ * @return an input stream providing the previous session's index data, or {@code null}
+ * if none is available, in which case a full re-indexing is required.
+ * The caller is required to {@link InputStream#close() close} this stream
+ */
+ public InputStream addIndex(WorkspaceModelIndex<?> index, IIndexSaveParticipant saveParticipant) {
+ ZipInputStream result = null;
+
+ workspaceIndices.put(index, saveParticipant);
+
+ IPath indexFile = indexFiles.get(index.getName());
+ File storeFile = (indexFile != null) ? getStoreFile(indexFile) : null;
+ if ((storeFile != null) && storeFile.exists()) {
+ try {
+ result = new ZipInputStream(new FileInputStream(storeFile));
+
+ // Get the Contents entry
+ result.getNextEntry();
+ } catch (Exception e) {
+ Activator.log.error("Failed to open index file for " + index.getName(), e); //$NON-NLS-1$
+ }
+ }
+
+ return result;
+ }
+
+ /**
+ * Removes an index from the persistence manager.
+ *
+ * @param index
+ * the index to remove
+ */
+ public void removeIndex(WorkspaceModelIndex<?> index) {
+ workspaceIndices.remove(index);
+ }
+
+ private IPath getIndexLocation() {
+ return Activator.getDefault().getStateLocation().append(INDEX_DIR);
+ }
+
+ private File getStoreFile(IPath storePath) {
+ return Activator.getDefault().getStateLocation().append(storePath).toFile();
+ }
+
+ private IPath getStorePath(WorkspaceModelIndex<?> index, int saveNumber) {
+ return INDEX_DIR.append(index.getName()).addFileExtension(String.valueOf(saveNumber));
+ }
+
+ private IPath getStoreLocation(WorkspaceModelIndex<?> index, int saveNumber) {
+ return Activator.getDefault().getStateLocation().append(getStorePath(index, saveNumber));
+ }
+
+ /**
+ * Obtains a workspace save participant to which the bundle's main participant
+ * delegates the index portion of workspace save.
+ * <p>
+ * <b>Note</b> that this delegate must never tell the {@link ISaveContext} that
+ * it needs a {@linkplain ISaveContext#needSaveNumber() save number} or a
+ * {@linkplain ISaveContext#needDelta() delta} as that is the responsibility
+ * of the bundle's save participant. Also, it is only ever invoked on a
+ * full workspace save.
+ * </p>
+ *
+ * @return the workspace save participant delegate
+ */
+ public ISaveParticipant getSaveParticipant() {
+ return new ISaveParticipant() {
+
+ private Map<String, IPath> newIndexFiles;
+
+ @Override
+ public void prepareToSave(ISaveContext context) throws CoreException {
+ // Ensure that our state location index directory exists
+ File indexDirectory = getIndexLocation().toFile();
+ if (!indexDirectory.exists()) {
+ indexDirectory.mkdir();
+ }
+ }
+
+ @Override
+ public void saving(ISaveContext context) throws CoreException {
+ // Save our indices
+ for (Map.Entry<WorkspaceModelIndex<?>, IIndexSaveParticipant> next : workspaceIndices.entrySet()) {
+ WorkspaceModelIndex<?> index = next.getKey();
+ IIndexSaveParticipant save = next.getValue();
+
+ if (save != null) {
+ File storeFile = getStoreLocation(index, context.getSaveNumber()).toFile();
+
+ try (OutputStream store = createStoreOutput(storeFile)) {
+ save.save(index, store);
+ } catch (IOException e) {
+ storeFile.delete(); // In case there's something there, it can't be trusted
+ throw new CoreException(new Status(IStatus.ERROR, Activator.PLUGIN_ID,
+ "Failed to save index " + index.getName(), e)); //$NON-NLS-1$
+ }
+ }
+ }
+
+ // Compute the new index file mappings
+ newIndexFiles = workspaceIndices.keySet().stream()
+ .collect(Collectors.toMap(
+ WorkspaceModelIndex::getName,
+ index -> getStorePath(index, context.getSaveNumber())));
+
+ // Remove old index mappings
+ for (String next : indexFiles.keySet()) {
+ context.map(new Path(next), null);
+ }
+
+ // Add new index mappings
+ for (Map.Entry<String, IPath> next : newIndexFiles.entrySet()) {
+ context.map(new Path(next.getKey()), next.getValue());
+ }
+ }
+
+ private OutputStream createStoreOutput(File storeFile) throws IOException {
+ ZipOutputStream result = new ZipOutputStream(new FileOutputStream(storeFile));
+ ZipEntry entry = new ZipEntry(ZIP_ENTRY);
+ result.putNextEntry(entry);
+ return result;
+ }
+
+ @Override
+ public void doneSaving(ISaveContext context) {
+ // Delete the old index files
+ try {
+ indexFiles.values().forEach(p -> getStoreFile(p).delete());
+ } catch (Exception e) {
+ // This doesn't stop us proceeding
+ Activator.log.error("Failed to clean up old index files", e); //$NON-NLS-1$
+ }
+
+ // Grab our new index files
+ indexFiles = newIndexFiles;
+ newIndexFiles = null;
+ }
+
+ @Override
+ public void rollback(ISaveContext context) {
+ try {
+ if (newIndexFiles != null) {
+ // Delete the new save files and mappings that we created
+ newIndexFiles.values().stream()
+ .map(IndexPersistenceManager.this::getStoreFile)
+ .forEach(File::delete);
+
+ // And the mappings
+ newIndexFiles.keySet().stream()
+ .map(Path::new)
+ .forEach(p -> context.map(p, null));
+
+ newIndexFiles = null;
+
+ // Then restore the old mappings
+ indexFiles.forEach((name, location) -> context.map(new Path(name), location));
+ }
+ } catch (Exception e) {
+ Activator.log.error("Failed to roll back model indices.", e); //$NON-NLS-1$
+ }
+
+ }
+ };
+ }
+
+}
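Outside the patch itself, the store format that `createStoreOutput` and `addIndex` share — a ZIP file containing a single well-known "Contents" entry — can be sketched with plain JDK streams. The class and method names below are illustrative only; only the entry name matches the patch's `ZIP_ENTRY` constant:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class IndexStoreSketch {
	/** The single well-known entry name, as in IndexPersistenceManager. */
	static final String ZIP_ENTRY = "Contents";

	/** Compress an index payload into the single-entry ZIP store format. */
	static byte[] write(byte[] payload) {
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
			zip.putNextEntry(new ZipEntry(ZIP_ENTRY));
			zip.write(payload);
		} catch (IOException e) {
			throw new UncheckedIOException(e);
		}
		return bytes.toByteArray();
	}

	/** Read the payload back, first skipping to the Contents entry as addIndex does. */
	static byte[] read(byte[] store) {
		try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(store))) {
			zip.getNextEntry(); // position at the Contents entry
			return zip.readAllBytes();
		} catch (IOException e) {
			throw new UncheckedIOException(e);
		}
	}
}
```

This keeps the on-disk index compressed while letting the reader hand a positioned stream straight to the index's own load routine.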
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/InternalModelIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/InternalModelIndex.java
new file mode 100644
index 00000000000..739f84e2135
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/internal/resource/index/InternalModelIndex.java
@@ -0,0 +1,118 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.internal.resource.index;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.ObjectInputStream;
+import java.io.ObjectStreamClass;
+import java.util.concurrent.Callable;
+
+import org.eclipse.core.resources.IFile;
+import org.eclipse.core.resources.IProject;
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.QualifiedName;
+import org.eclipse.core.runtime.content.IContentType;
+import org.eclipse.papyrus.infra.emf.resource.index.WorkspaceModelIndex;
+
+import com.google.common.util.concurrent.ListenableFuture;
+
+/**
+ * Internal implementation details of a {@link WorkspaceModelIndex}.
+ */
+public abstract class InternalModelIndex {
+
+ private final QualifiedName indexKey;
+ private final int maxIndexJobs;
+
+ /** My manager. */
+ private IndexManager manager;
+
+ /** A class loader that knows the classes of the owner (bundle) context. */
+ private ClassLoader ownerClassLoader;
+
+ /**
+ * Initializes me.
+ */
+ public InternalModelIndex(QualifiedName indexKey, int maxIndexJobs) {
+ super();
+
+ this.indexKey = indexKey;
+ this.maxIndexJobs = maxIndexJobs;
+ }
+
+ /**
+ * Initializes me.
+ */
+ public InternalModelIndex(QualifiedName indexKey) {
+ this(indexKey, 0);
+ }
+
+ public final QualifiedName getIndexKey() {
+ return indexKey;
+ }
+
+ public final int getMaxIndexJobs() {
+ return maxIndexJobs;
+ }
+
+ protected final IContentType[] getContentTypes(IFile file) {
+ return manager.getContentTypes(file);
+ }
+
+ /**
+ * Obtains an asynchronous future result that is scheduled to run after
+ * any pending indexing work has completed.
+ *
+ * @param callable
+ * the operation to schedule
+ *
+ * @return the future result of the operation
+ */
+ protected <V> ListenableFuture<V> afterIndex(final Callable<V> callable) {
+ return manager.afterIndex(this, callable);
+ }
+
+ void setOwnerClassLoader(ClassLoader ownerClassLoader) {
+ this.ownerClassLoader = ownerClassLoader;
+ }
+
+ protected final ObjectInputStream createObjectInput(InputStream underlying) throws IOException {
+ return (ownerClassLoader == null)
+ ? new ObjectInputStream(underlying)
+ : new ObjectInputStream(underlying) {
+ @Override
+ protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
+ return Class.forName(desc.getName(), true, ownerClassLoader);
+ }
+ };
+ }
+
+ protected abstract void dispose();
+
+ void start(IndexManager manager) {
+ this.manager = manager;
+ start();
+ }
+
+ protected abstract void start();
+
+ protected abstract boolean match(IFile file);
+
+ protected abstract void process(IFile file) throws CoreException;
+
+ protected abstract void remove(IProject project, IFile file) throws CoreException;
+
+ protected abstract void remove(IProject project) throws CoreException;
+}
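The loader-aware deserialization in `createObjectInput` above can be illustrated standalone. This is a minimal sketch with hypothetical names; the real method substitutes the index owner's bundle class loader so that serialized index records whose classes live in other bundles resolve correctly under OSGi:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class LoaderAwareStreams {
	/** Serialize a value, then deserialize it resolving classes through the given loader. */
	static Object roundTrip(Serializable value, ClassLoader loader) {
		try {
			ByteArrayOutputStream bytes = new ByteArrayOutputStream();
			try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
				out.writeObject(value);
			}
			try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray())) {
				@Override
				protected Class<?> resolveClass(ObjectStreamClass desc)
						throws IOException, ClassNotFoundException {
					// Resolve through the supplied loader, not the default caller loader
					return Class.forName(desc.getName(), true, loader);
				}
			}) {
				return in.readObject();
			}
		} catch (IOException | ClassNotFoundException e) {
			throw new IllegalStateException(e);
		}
	}
}
```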
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ICrossReferenceIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ICrossReferenceIndex.java
new file mode 100644
index 00000000000..920eab7628f
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ICrossReferenceIndex.java
@@ -0,0 +1,274 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.resource;
+
+import static org.eclipse.papyrus.infra.emf.internal.resource.InternalIndexUtil.getSemanticModelFileExtensions;
+
+import java.util.Set;
+
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.core.runtime.jobs.Job;
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.emf.ecore.plugin.EcorePlugin;
+import org.eclipse.emf.ecore.resource.ResourceSet;
+import org.eclipse.papyrus.infra.emf.internal.resource.CrossReferenceIndex;
+import org.eclipse.papyrus.infra.emf.internal.resource.OnDemandCrossReferenceIndex;
+
+import com.google.common.collect.SetMultimap;
+import com.google.common.util.concurrent.ListenableFuture;
+
+
+/**
+ * API for an index of cross-resource proxy references in the workspace, especially
+ * containment proxies of the "shard" variety: controlled units that are not openable
+ * in their own editors but must be opened from the root resource of the controlled unit
+ * graph.
+ *
+ * @since 2.1
+ */
+public interface ICrossReferenceIndex {
+
+ /**
+ * Obtains the cross-reference index for the given resource set.
+ *
+ * @param resourceSet
+ * a resource-set managing the resources to which
+ * cross-reference queries are to be applied, or {@code null}
+ * if there is no contextual resource set, in which case a
+ * default, heuristically determined set of resource kinds
+ * will be indexed
+ *
+ * @return the cross-reference index
+ */
+ static ICrossReferenceIndex getInstance(ResourceSet resourceSet) {
+ ICrossReferenceIndex result;
+
+ if (!EcorePlugin.IS_ECLIPSE_RUNNING || Job.getJobManager().isSuspended()) {
+ // We cannot rely on jobs and the workspace to calculate the index
+ // in the background
+ result = new OnDemandCrossReferenceIndex(getSemanticModelFileExtensions(resourceSet));
+ } else {
+ result = CrossReferenceIndex.getInstance();
+ }
+
+ return result;
+ }
+
+ /**
+ * Asynchronously queries the mapping of URIs of resources to URIs of others
+ * that they cross-reference to.
+ *
+ * @return a future result of the mapping of resource URIs to cross-referenced URIs
+ */
+ ListenableFuture<SetMultimap<URI, URI>> getOutgoingCrossReferencesAsync();
+
+ /**
+ * Queries the mapping of URIs of resources to URIs of others
+ * that they cross-reference to.
+ *
+ * @return the mapping of resource URIs to cross-referenced URIs
+ *
+ * @throws CoreException
+ * if the index either fails to compute the cross-references or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ SetMultimap<URI, URI> getOutgoingCrossReferences() throws CoreException;
+
+ /**
+ * Asynchronously queries the URIs of other resources that a given resource
+ * cross-references to.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return a future result of the resource URIs that it cross-references to
+ */
+ ListenableFuture<Set<URI>> getOutgoingCrossReferencesAsync(URI resourceURI);
+
+ /**
+ * Queries the URIs of other resources that a given resource
+ * cross-references to.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return the resource URIs that it cross-references to
+ *
+ * @throws CoreException
+ * if the index either fails to compute the cross-references or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ Set<URI> getOutgoingCrossReferences(URI resourceURI) throws CoreException;
+
+ /**
+ * Asynchronously queries the mapping of URIs of resources to URIs of others
+ * from which they are cross-referenced.
+ *
+ * @return a future result of the mapping of resource URIs to cross-referencing URIs
+ */
+ ListenableFuture<SetMultimap<URI, URI>> getIncomingCrossReferencesAsync();
+
+ /**
+ * Queries the mapping of URIs of resources to URIs of others
+ * from which they are cross-referenced.
+ *
+ * @return the mapping of resource URIs to cross-referencing URIs
+ *
+ * @throws CoreException
+ * if the index either fails to compute the cross-references or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ SetMultimap<URI, URI> getIncomingCrossReferences() throws CoreException;
+
+ /**
+ * Asynchronously queries the URIs of other resources that cross-reference to
+ * a given resource.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return a future result of the resource URIs that cross-reference to it
+ */
+ ListenableFuture<Set<URI>> getIncomingCrossReferencesAsync(URI resourceURI);
+
+ /**
+ * Queries the URIs of other resources that cross-reference to
+ * a given resource.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return the resource URIs that cross-reference to it
+ *
+ * @throws CoreException
+ * if the index either fails to compute the cross-references or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ Set<URI> getIncomingCrossReferences(URI resourceURI) throws CoreException;
+
+ /**
+ * Asynchronously queries whether a resource is a "shard".
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return a future result of whether the resource is a "shard"
+ */
+ ListenableFuture<Boolean> isShardAsync(URI resourceURI);
+
+ /**
+ * Queries whether a resource is a "shard".
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return whether the resource is a "shard"
+ *
+ * @throws CoreException
+ * if the index either fails to compute the shard-ness or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ boolean isShard(URI resourceURI) throws CoreException;
+
+ /**
+ * Asynchronously queries the mapping of URIs of resources to URIs of shards that are their immediate
+ * children.
+ *
+ * @return a future result of the mapping of resource URIs to shard URIs
+ */
+ ListenableFuture<SetMultimap<URI, URI>> getShardsAsync();
+
+ /**
+ * Queries the mapping of URIs of resources to URIs of shards that are their immediate
+ * children.
+ *
+ * @return the mapping of resource URIs to shard URIs
+ *
+ * @throws CoreException
+ * if the index either fails to compute the shards or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ SetMultimap<URI, URI> getShards() throws CoreException;
+
+ /**
+ * Asynchronously queries the URIs of resources that are immediate shards of a
+ * given resource.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return a future result of the URIs of shards that are its immediate children
+ */
+ ListenableFuture<Set<URI>> getShardsAsync(URI resourceURI);
+
+ /**
+ * Queries the URIs of resources that are immediate shards of a
+ * given resource.
+ *
+ * @param resourceURI
+ * the URI of a resource
+ * @return the URIs of shards that are its immediate children
+ *
+ * @throws CoreException
+ * if the index either fails to compute the shards or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ Set<URI> getShards(URI resourceURI) throws CoreException;
+
+ /**
+ * Asynchronously queries URIs of resources that are immediate parents of a given
+ * (potential) shard resource.
+ *
+ * @param shardURI
+ * the URI of a potential shard resource. It need not actually
+ * be a shard, in which case it trivially has no parents
+ * @return the future result of the URIs of resources that are immediate parents of
+ * the shard
+ */
+ ListenableFuture<Set<URI>> getParentsAsync(URI shardURI);
+
+ /**
+ * Queries URIs of resources that are immediate parents of a given
+ * (potential) shard resource.
+ *
+ * @param shardURI
+ * the URI of a potential shard resource. It need not actually
+ * be a shard, in which case it trivially has no parents
+ * @return the URIs of resources that are immediate parents of
+ * the shard
+ *
+ * @throws CoreException
+ * if the index either fails to compute the parents or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ Set<URI> getParents(URI shardURI) throws CoreException;
+
+ /**
+ * Asynchronously queries URIs of resources that are roots (ultimate parents) of a given
+ * (potential) shard resource.
+ *
+ * @param shardURI
+ * the URI of a potential shard resource. It need not actually
+ * be a shard, in which case it trivially has no parents
+ * @return the future result of the URIs of resources that are roots of its parent graph
+ */
+ ListenableFuture<Set<URI>> getRootsAsync(URI shardURI);
+
+ /**
+ * Queries URIs of resources that are roots (ultimate parents) of a given
+ * (potential) shard resource.
+ *
+ * @param shardURI
+ * the URI of a potential shard resource. It need not actually
+ * be a shard, in which case it trivially has no parents
+ * @return the URIs of resources that are roots of its parent graph
+ *
+ * @throws CoreException
+ * if the index either fails to compute the roots or if
+ * the calling thread is interrupted in waiting for the result
+ */
+ Set<URI> getRoots(URI shardURI) throws CoreException;
+
+}
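As a rough illustration of the `getRoots` contract, the roots of a shard are the fixpoint of following `getParents` edges upward until resources with no parents are reached. The sketch below is plain JDK, using strings for URIs and a hypothetical `parents` map in place of the real index; treating a parentless starting resource as its own root is an assumption of the sketch, not something the interface above guarantees:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class ShardRootsSketch {
	/**
	 * Walk the parents relation upward from a resource until resources with
	 * no parents are reached; those are the roots. A visited set guards
	 * against cycles in a malformed shard graph.
	 */
	static Set<String> roots(String resource, Map<String, Set<String>> parents) {
		Set<String> result = new LinkedHashSet<>();
		Set<String> visited = new HashSet<>();
		Deque<String> queue = new ArrayDeque<>();
		queue.push(resource);
		while (!queue.isEmpty()) {
			String next = queue.pop();
			if (!visited.add(next)) {
				continue; // already processed: cycle or diamond in the graph
			}
			Set<String> ps = parents.getOrDefault(next, Set.of());
			if (ps.isEmpty()) {
				result.add(next); // no parents: this is a root
			} else {
				ps.forEach(queue::push);
			}
		}
		return result;
	}
}
```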
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceHelper.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceHelper.java
new file mode 100644
index 00000000000..29c004eb5c6
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceHelper.java
@@ -0,0 +1,418 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.resource;
+
+import static org.eclipse.papyrus.infra.emf.internal.resource.AbstractCrossReferenceIndex.SHARD_ANNOTATION_SOURCE;
+
+import java.util.Collection;
+import java.util.List;
+
+import org.eclipse.emf.common.command.Command;
+import org.eclipse.emf.common.command.CommandWrapper;
+import org.eclipse.emf.common.command.IdentityCommand;
+import org.eclipse.emf.common.notify.Adapter;
+import org.eclipse.emf.common.notify.Notification;
+import org.eclipse.emf.common.notify.Notifier;
+import org.eclipse.emf.common.notify.impl.AdapterImpl;
+import org.eclipse.emf.ecore.EAnnotation;
+import org.eclipse.emf.ecore.EModelElement;
+import org.eclipse.emf.ecore.EObject;
+import org.eclipse.emf.ecore.EcoreFactory;
+import org.eclipse.emf.ecore.EcorePackage;
+import org.eclipse.emf.ecore.InternalEObject;
+import org.eclipse.emf.ecore.resource.Resource;
+import org.eclipse.emf.ecore.util.EcoreUtil;
+import org.eclipse.emf.edit.command.AddCommand;
+import org.eclipse.emf.edit.command.RemoveCommand;
+import org.eclipse.emf.edit.domain.EditingDomain;
+import org.eclipse.papyrus.infra.emf.utils.EMFHelper;
+import org.eclipse.papyrus.infra.tools.util.TypeUtils;
+
+/**
+ * A convenience wrapper for {@link EObject}s and/or {@link Resource}s that
+ * are dependent "shard" units of a Papyrus model. A shard helper must
+ * always be {@linkplain #close() closed} after it is no longer needed,
+ * because it attaches adapters to the model.
+ *
+ * @since 2.1
+ */
+public class ShardResourceHelper implements AutoCloseable {
+
+ private final Resource resource;
+ private final EObject object;
+
+ private boolean closed;
+ private boolean initialized;
+
+ private EAnnotation annotation;
+ private Adapter annotationAdapter;
+
+ /**
+ * Initializes me on a shard {@code resource} that is expected to contain
+ * only one root element (it doesn't store multiple distinct sub-trees
+ * of the model).
+ *
+ * @param resource
+ * a "shard" resource
+ *
+ * @see #ShardResourceHelper(EObject)
+ */
+ public ShardResourceHelper(Resource resource) {
+ this(resource, null);
+ }
+
+ /**
+ * Initializes me on an {@code element} in a shard resource that uniquely
+ * identifies a sub-tree of potentially more than one stored in the resource.
+ * If there is any possibility that a resource stores multiple sub-trees,
+ * prefer this constructor over {@linkplain #ShardResourceHelper(Resource) the other}.
+ *
+ * @param element
+ * an element in a "resource" resource
+ */
+ public ShardResourceHelper(EObject element) {
+ this(element.eResource(), element);
+ }
+
+ private ShardResourceHelper(Resource resource, EObject object) {
+ super();
+
+ this.resource = resource;
+ this.object = object;
+ }
+
+ /**
+ * Is my resource a shard?
+ *
+ * @return whether my resource is a shard of its parent
+ */
+ public boolean isShard() {
+ return getAnnotation() != null;
+ }
+
+ /**
+ * Changes my resource from a shard to an independent controlled unit, or vice-versa.
+ * In the context of an editor and/or editing-domain, it is usually more appropriate
+ * to use the {@link #getSetShardCommand(boolean)} API for manipulation by command.
+ *
+ * @param isShard
+ * whether my resource should be a shard. If it already matches
+ * this state, then do nothing
+ *
+ * @see #getSetShardCommand(boolean)
+ */
+ public void setShard(boolean isShard) {
+ checkClosed();
+
+ if (isShard != isShard()) {
+ if (getAnnotation() != null) {
+ // We are un-sharding
+ EcoreUtil.remove(getAnnotation());
+ } else {
+ // We are sharding
+ EAnnotation annotation = EcoreFactory.eINSTANCE.createEAnnotation();
+ annotation.setSource(SHARD_ANNOTATION_SOURCE);
+ Notifier annotationOwner;
+
+ EObject shardElement = getShardElement();
+ if (shardElement instanceof EModelElement) {
+ // Add it to the shard element
+ ((EModelElement) shardElement).getEAnnotations().add(annotation);
+ annotationOwner = shardElement;
+ } else if (shardElement != null) {
+ // Add it after the shard element
+ int index = resource.getContents().indexOf(shardElement) + 1;
+ resource.getContents().add(index, annotation);
+ annotationOwner = resource;
+ } else {
+ // Try to add it after the principal model object
+ resource.getContents().add(Math.min(1, resource.getContents().size()), annotation);
+ annotationOwner = resource;
+ }
+
+ // In any case, the parent is the resource storing the element's container
+ if ((shardElement != null) && (shardElement.eContainer() != null)) {
+ annotation.getReferences().add(shardElement.eContainer());
+ }
+
+ setAnnotation(annotation);
+ attachAnnotationAdapter(annotationOwner);
+ }
+ }
+ }
+
+ /**
+ * Finds the element that is the root of the particular sub-tree stored in
+ * this resource, from the context provided by the client.
+ *
+ * @return the shard root element as best determined from the context, or
+ * {@code null} in the worst case that the resource is empty
+ */
+ private EObject getShardElement() {
+ checkClosed();
+
+ EObject result = null;
+
+ if (object != null) {
+ // Find the object in its content tree that is a root of our resource
+ for (result = object; result != null; result = result.eContainer()) {
+ InternalEObject internal = (InternalEObject) result;
+ if (internal.eDirectResource() == resource) {
+ // Found it
+ break;
+ }
+ }
+ }
+
+ if ((result == null) && !resource.getContents().isEmpty()) {
+ // Just take the first element as the shard element
+ result = resource.getContents().get(0);
+ }
+
+ return result;
+ }
+
+ /**
+ * Obtains a command to change my resource from a shard to an independent
+ * controlled unit, or vice-versa.
+ *
+ * @param isShard
+ * whether my resource should be a shard. If it already matches
+ * this state, then the resulting command will have no effect
+ *
+ * @return the set-shard command
+ *
+ * @see #setShard(boolean)
+ */
+ public Command getSetShardCommand(boolean isShard) {
+ Command result;
+
+ if (isShard() == isShard) {
+ result = IdentityCommand.INSTANCE;
+ } else if (getAnnotation() != null) {
+ // Delete the annotation
+ EAnnotation annotation = getAnnotation();
+ if (annotation.getEModelElement() != null) {
+ result = RemoveCommand.create(EMFHelper.resolveEditingDomain(annotation),
+ annotation.getEModelElement(),
+ EcorePackage.Literals.EMODEL_ELEMENT__EANNOTATIONS,
+ annotation);
+ } else {
+ result = new RemoveCommand(EMFHelper.resolveEditingDomain(resource),
+ resource.getContents(),
+ annotation);
+ }
+ } else {
+ // Create the annotation
+ EAnnotation annotation = EcoreFactory.eINSTANCE.createEAnnotation();
+ annotation.setSource(SHARD_ANNOTATION_SOURCE);
+
+ EditingDomain domain;
+ EObject shardElement = getShardElement();
+ Notifier annotationOwner;
+
+ if (shardElement instanceof EModelElement) {
+ // Add it to the shard element
+ domain = EMFHelper.resolveEditingDomain(shardElement);
+ result = AddCommand.create(domain, shardElement,
+ EcorePackage.Literals.EMODEL_ELEMENT__EANNOTATIONS,
+ annotation);
+ annotationOwner = shardElement;
+ } else if (shardElement != null) {
+ // Add it after the shard element
+ int index = resource.getContents().indexOf(shardElement) + 1;
+ domain = EMFHelper.resolveEditingDomain(shardElement);
+ result = new AddCommand(domain, resource.getContents(), annotation, index);
+ annotationOwner = resource;
+ } else {
+ // Try to add it after the principal model object
+ domain = EMFHelper.resolveEditingDomain(resource);
+ int index = Math.min(1, resource.getContents().size());
+ result = new AddCommand(domain, resource.getContents(), annotation, index);
+ annotationOwner = resource;
+ }
+
+ // In any case, the parent is the resource storing the element's container
+ if ((shardElement != null) && (shardElement.eContainer() != null)) {
+ result = result.chain(AddCommand.create(domain, annotation,
+ EcorePackage.Literals.EANNOTATION__REFERENCES,
+ shardElement.eContainer()));
+ }
+
+ // Ensure attachment of the adapter on first execution and record the
+ // annotation, if not already closed
+ result = new CommandWrapper(result) {
+ @Override
+ public void execute() {
+ super.execute();
+
+ if (!ShardResourceHelper.this.isClosed()) {
+ setAnnotation(annotation);
+ attachAnnotationAdapter(annotationOwner);
+ }
+ }
+ };
+ }
+
+ return result;
+ }
+
+ /**
+ * Closes me, ensuring at least that any adapter I have attached to the model
+ * that retains me is detached. Once I have been closed, I cannot be used
+ * any longer.
+ */
+ @Override
+ public void close() {
+ closed = true;
+
+ doClose();
+ }
+
+ protected void doClose() {
+ clearAnnotation();
+ detachAnnotationAdapter();
+ }
+
+ /**
+ * Queries whether I have been {@linkplain #close() closed}.
+ *
+ * @return whether I have been closed
+ */
+ public final boolean isClosed() {
+ return closed;
+ }
+
+ protected final void checkClosed() {
+ if (isClosed()) {
+ throw new IllegalStateException("closed"); //$NON-NLS-1$
+ }
+ }
+
+ private EAnnotation getAnnotation() {
+ checkClosed();
+
+ if (!initialized) {
+ setAnnotation(findAnnotation());
+ initialized = true;
+ }
+
+ return annotation;
+ }
+
+ private EAnnotation findAnnotation() {
+ EAnnotation result = null;
+
+ if (!resource.getContents().isEmpty()) {
+ EObject shardElement = getShardElement();
+ Notifier annotationOwner;
+
+ if (shardElement instanceof EModelElement) {
+ result = ((EModelElement) shardElement).getEAnnotation(SHARD_ANNOTATION_SOURCE);
+ annotationOwner = shardElement;
+ } else {
+ // Maybe it's just in the resource?
+ List<EObject> contents = resource.getContents();
+ annotationOwner = resource;
+
+ if (shardElement != null) {
+ int index = contents.indexOf(shardElement) + 1;
+ if (index < contents.size()) {
+ EAnnotation maybe = TypeUtils.as(contents.get(index), EAnnotation.class);
+ if ((maybe != null) && SHARD_ANNOTATION_SOURCE.equals(maybe.getSource())) {
+ // That's it
+ result = maybe;
+ }
+ }
+ }
+
+ if ((result == null) && (object == null)) {
+ // If we don't have a specific sub-tree in mind, look for any
+ // shard annotation
+ result = contents.stream()
+ .filter(EAnnotation.class::isInstance).map(EAnnotation.class::cast)
+ .filter(a -> SHARD_ANNOTATION_SOURCE.equals(a.getSource()))
+ .findFirst().orElse(null);
+ }
+ }
+
+ if (result != null) {
+ attachAnnotationAdapter(annotationOwner);
+ }
+ }
+
+ return result;
+ }
+
+ private void clearAnnotation() {
+ initialized = false;
+ setAnnotation(null);
+ }
+
+ private void setAnnotation(EAnnotation annotation) {
+ this.annotation = annotation;
+ }
+
+ private void attachAnnotationAdapter(Notifier annotationOwner) {
+ // If we already attached the adapter, it is still in place
+ if (annotationAdapter == null) {
+ annotationAdapter = new AdapterImpl() {
+ @Override
+ public void notifyChanged(Notification msg) {
+ if (msg.getEventType() == Notification.REMOVING_ADAPTER) {
+ // My target was unloaded
+ clearAnnotation();
+ } else if ((msg.getFeature() == EcorePackage.Literals.EMODEL_ELEMENT__EANNOTATIONS)
+ || ((msg.getNotifier() == resource) && (msg.getFeatureID(Resource.class) == Resource.RESOURCE__CONTENTS))) {
+
+ // Annotation of the model element or resource changed
+ boolean clear = false;
+
+ switch (msg.getEventType()) {
+ case Notification.SET:
+ case Notification.UNSET:
+ case Notification.REMOVE:
+ clear = (msg.getOldValue() == getAnnotation());
+ break;
+ case Notification.ADD:
+ case Notification.ADD_MANY:
+ // If we don't have an annotation, we'll try to find it
+ clear = getAnnotation() == null;
+ break;
+ case Notification.REMOVE_MANY:
+ clear = ((Collection<?>) msg.getOldValue()).contains(getAnnotation());
+ break;
+ }
+
+ if (clear) {
+ // In case the annotation moved or was replaced,
+ // we'll compute it again on-the-fly
+ clearAnnotation();
+ }
+ }
+ }
+ };
+
+ annotationOwner.eAdapters().add(annotationAdapter);
+ }
+ }
+
+ private void detachAnnotationAdapter() {
+ if (annotationAdapter != null) {
+ Adapter adapter = annotationAdapter;
+ annotationAdapter = null;
+ adapter.getTarget().eAdapters().remove(adapter);
+ }
+ }
+}
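The adapter installed by attachAnnotationAdapter decides, per notification, whether the cached shard annotation must be recomputed. A hedged, stdlib-only sketch of that decision (the EventType enum and shouldClear method are stand-ins for EMF's Notification constants, not the real API):

```java
import java.util.Collection;
import java.util.List;

// Stand-alone sketch of the cache-invalidation switch in the adapter above.
class AnnotationCacheSketch {
	enum EventType { SET, UNSET, ADD, ADD_MANY, REMOVE, REMOVE_MANY }

	// Decide whether the cached shard annotation must be recomputed.
	static boolean shouldClear(EventType type, Object oldValue, Object cached) {
		switch (type) {
		case SET:
		case UNSET:
		case REMOVE:
			// The value that changed was our cached annotation
			return oldValue == cached;
		case ADD:
		case ADD_MANY:
			// Nothing cached yet; a newly added annotation may now match
			return cached == null;
		case REMOVE_MANY:
			// Our cached annotation was among the removed values
			return ((Collection<?>) oldValue).contains(cached);
		default:
			return false;
		}
	}

	public static void main(String[] args) {
		Object annotation = new Object();
		System.out.println(shouldClear(EventType.REMOVE, annotation, annotation)); // true
		System.out.println(shouldClear(EventType.REMOVE_MANY, List.of(annotation), annotation)); // true
		System.out.println(shouldClear(EventType.ADD, null, annotation)); // false
	}
}
```

Note that ADD/ADD_MANY only trigger a recomputation when nothing is cached, so an unrelated annotation being added does not invalidate a valid cache entry.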
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceLocator.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceLocator.java
new file mode 100644
index 00000000000..397e693707a
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/ShardResourceLocator.java
@@ -0,0 +1,178 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.resource;
+
+import static org.eclipse.papyrus.infra.emf.internal.resource.InternalIndexUtil.getSemanticModelFileExtensions;
+
+import java.util.HashSet;
+import java.util.Set;
+import java.util.function.Supplier;
+
+import org.eclipse.core.runtime.CoreException;
+import org.eclipse.emf.common.util.TreeIterator;
+import org.eclipse.emf.common.util.URI;
+import org.eclipse.emf.ecore.EObject;
+import org.eclipse.emf.ecore.EReference;
+import org.eclipse.emf.ecore.InternalEObject;
+import org.eclipse.emf.ecore.resource.Resource;
+import org.eclipse.emf.ecore.resource.ResourceSet;
+import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
+import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.ResourceLocator;
+import org.eclipse.emf.ecore.util.EcoreUtil;
+import org.eclipse.emf.ecore.util.InternalEList;
+import org.eclipse.papyrus.infra.emf.Activator;
+
+/**
+ * A {@link ResourceLocator} that can be used with any {@link ResourceSet}
+ * so that, when a shard resource is demand-loaded by proxy resolution,
+ * it is loaded from the top down, ensuring that dependencies such as profile
+ * applications in UML models are resolved before the shard itself is loaded.
+ *
+ * @since 2.1
+ */
+public class ShardResourceLocator extends ResourceLocator {
+
+	private final Set<URI> inDemandLoadHelper = new HashSet<>();
+
+ private final Supplier<? extends ICrossReferenceIndex> index;
+
+ private final Set<String> semanticModelExtensions;
+
+ /**
+ * Installs me in the given resource set. I use the best available
+ * {@link ICrossReferenceIndex} for resolution of shard relationships.
+ *
+ * @param resourceSet
+	 *            the resource set for which I shall provide resources
+ */
+ public ShardResourceLocator(ResourceSetImpl resourceSet) {
+ this(resourceSet, () -> ICrossReferenceIndex.getInstance(resourceSet));
+ }
+
+ /**
+ * Installs me in the given resource set with a particular {@code index}.
+ *
+ * @param resourceSet
+	 *            the resource set for which I shall provide resources
+ * @param index
+ * the index to use for resolving shard relationships
+ */
+ public ShardResourceLocator(ResourceSetImpl resourceSet, ICrossReferenceIndex index) {
+ this(resourceSet, () -> index);
+ }
+
+ /**
+ * Installs me in the given resource set with a dynamic {@code index} supplier.
+ *
+ * @param resourceSet
+	 *            the resource set for which I shall provide resources
+ * @param index
+ * a dynamic supplier of the index to use for resolving shard relationships
+ */
+ public ShardResourceLocator(ResourceSetImpl resourceSet, Supplier<? extends ICrossReferenceIndex> index) {
+ super(resourceSet);
+
+ this.index = index;
+ this.semanticModelExtensions = getSemanticModelFileExtensions(resourceSet);
+ }
+
+ /**
+ * Handles shard resources by loading their roots first and the chain(s) of resources
+ * all the way down to the shard.
+ */
+ @Override
+ public Resource getResource(URI uri, boolean loadOnDemand) {
+ if (loadOnDemand && uri.isPlatformResource()
+ && semanticModelExtensions.contains(uri.fileExtension())) {
+
+ // Is it already loaded? This saves blocking on the cross-reference index
+ Resource existing = getResource(uri, false);
+ if ((existing == null) || !existing.isLoaded()) {
+ // Do our peculiar process
+ handleShard(uri);
+ }
+ }
+
+ return basicGetResource(uri, loadOnDemand);
+ }
+
+ /**
+ * Handles the case of demand-loading of a shard by loading it from the root resource
+ * on down.
+ *
+ * @param uri
+ * the URI of a resource that may be a shard
+ */
+ protected void handleShard(URI uri) {
+ try {
+ Set<URI> parents = index.get().getParents(uri);
+
+ if (!parents.isEmpty()) {
+ // Load from the root resource down
+ parents.stream()
+ .filter(this::notLoaded)
+ .forEach(r -> loadParentResource(r, uri));
+ }
+ } catch (CoreException e) {
+ Activator.log.log(e.getStatus());
+ }
+ }
+
+ protected boolean notLoaded(URI uri) {
+ Resource resource = resourceSet.getResource(uri, false);
+ return (resource == null) || !resource.isLoaded();
+ }
+
+ protected void loadParentResource(URI parentURI, URI shard) {
+ // This operates recursively on the demand-load helper
+ Resource parent = resourceSet.getResource(parentURI, true);
+
+		// Unlock the shard resource now
+ inDemandLoadHelper.remove(shard);
+
+ // Scan for the cross-resource containment
+ URI shardURI = normalize(shard);
+ for (TreeIterator<EObject> iter = EcoreUtil.getAllProperContents(parent, false); iter.hasNext();) {
+ EObject next = iter.next();
+ if (next.eIsProxy()) {
+				// Always compare normalized URIs when determining 'same resource'
+ URI proxyURI = normalize(((InternalEObject) next).eProxyURI());
+ if (proxyURI.trimFragment().equals(shardURI)) {
+ // This is our parent object
+ EObject parentObject = next.eContainer();
+
+ // Resolve the reference
+ EReference containment = next.eContainmentFeature();
+ if (!containment.isMany()) {
+ // Easy case
+ parentObject.eGet(containment, true);
+ } else {
+ InternalEList<?> list = (InternalEList<?>) parentObject.eGet(containment);
+ int index = list.basicIndexOf(next);
+ if (index >= 0) {
+ // Resolve it
+ list.get(index);
+ }
+ }
+ break;
+ }
+ }
+ }
+ }
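The comment in loadParentResource above insists on comparing only normalized URIs. As a hedged, stdlib-only analogy (EMF normalizes through the resource set's URIConverter, and trims fragments with trimFragment(); here java.net.URI and string splitting stand in for both, and the platform:/resource URIs are illustrative):

```java
import java.net.URI;

// Two spellings of the same resource URI compare unequal until normalized.
class UriNormalizationSketch {
	public static void main(String[] args) {
		URI a = URI.create("platform:/resource/project/./model.uml#_id1");
		URI b = URI.create("platform:/resource/project/model.uml#_other");

		// Drop the fragments first ("trimFragment" in EMF terms)
		URI aTrimmed = URI.create(a.toString().split("#")[0]);
		URI bTrimmed = URI.create(b.toString().split("#")[0]);

		// Un-normalized: the "./" segment makes the paths differ
		System.out.println(aTrimmed.equals(bTrimmed)); // false
		// Normalized: both collapse to the same path
		System.out.println(aTrimmed.normalize().equals(bTrimmed.normalize())); // true
	}
}
```

Skipping the normalization step would make the proxy scan silently miss the containment reference, which is why the loop never compares raw proxy URIs.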
+
+ protected URI normalize(URI uri) {
+ return resourceSet.getURIConverter().normalize(uri);
+ }
+
+}
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/IWorkspaceModelIndexProvider.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/IWorkspaceModelIndexProvider.java
new file mode 100644
index 00000000000..fb18c57198b
--- /dev/null
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/IWorkspaceModelIndexProvider.java
@@ -0,0 +1,27 @@
+/*****************************************************************************
+ * Copyright (c) 2016 Christian W. Damus and others.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Christian W. Damus - Initial API and implementation
+ *
+ *****************************************************************************/
+
+package org.eclipse.papyrus.infra.emf.resource.index;
+
+import java.util.function.Supplier;
+
+/**
+ * A provider of a model index on the <tt>org.eclipse.papyrus.infra.emf.index</tt>
+ * extension point.
+ *
+ * @since 2.1
+ */
+@FunctionalInterface
+public interface IWorkspaceModelIndexProvider extends Supplier<WorkspaceModelIndex<?>> {
+ // Nothing to add
+}
diff --git a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/WorkspaceModelIndex.java b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/WorkspaceModelIndex.java
index 98e6b063472..91b17fc5ef3 100644
--- a/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/WorkspaceModelIndex.java
+++ b/plugins/infra/emf/org.eclipse.papyrus.infra.emf/src/org/eclipse/papyrus/infra/emf/resource/index/WorkspaceModelIndex.java
@@ -1,5 +1,5 @@
/*****************************************************************************
- * Copyright (c) 2014, 2015 Christian W. Damus and others.
+ * Copyright (c) 2014, 2016 Christian W. Damus and others.
*
* All rights reserved. This program and the accompanying materials
* are made available under the terms of the Eclipse Public License v1.0
@@ -15,63 +15,41 @@ package org.eclipse.papyrus.infra.emf.resource.index;
import java.io.IOException;
import java.io.InputStream;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Deque;
+import java.io.ObjectInput;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutput;
+import java.io.ObjectOutputStream;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
-import java.util.concurrent.ConcurrentLinkedQueue;
-import java.util.concurrent.CopyOnWriteArrayList;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Future;
-import java.util.concurrent.Semaphore;
-import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.locks.Condition;
-import java.util.concurrent.locks.Lock;
-import java.util.concurrent.locks.ReentrantLock;
+import java.util.stream.Collectors;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
-import org.eclipse.core.resources.IResourceChangeEvent;
-import org.eclipse.core.resources.IResourceChangeListener;
-import org.eclipse.core.resources.IResourceDelta;
-import org.eclipse.core.resources.IResourceDeltaVisitor;
-import org.eclipse.core.resources.IResourceVisitor;
-import org.eclipse.core.resources.IWorkspace;
+import org.eclipse.core.resources.IWorkspaceRoot;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;
-import org.eclipse.core.runtime.IProgressMonitor;
-import org.eclipse.core.runtime.IStatus;
+import org.eclipse.core.runtime.IPath;
+import org.eclipse.core.runtime.Path;
import org.eclipse.core.runtime.Platform;
import org.eclipse.core.runtime.QualifiedName;
-import org.eclipse.core.runtime.Status;
-import org.eclipse.core.runtime.SubMonitor;
import org.eclipse.core.runtime.content.IContentType;
-import org.eclipse.core.runtime.content.IContentTypeManager;
-import org.eclipse.core.runtime.jobs.IJobChangeEvent;
-import org.eclipse.core.runtime.jobs.IJobChangeListener;
-import org.eclipse.core.runtime.jobs.Job;
-import org.eclipse.core.runtime.jobs.JobChangeAdapter;
-import org.eclipse.osgi.util.NLS;
-import org.eclipse.papyrus.infra.core.utils.JobBasedFuture;
-import org.eclipse.papyrus.infra.core.utils.JobExecutorService;
import org.eclipse.papyrus.infra.emf.Activator;
-import org.eclipse.papyrus.infra.tools.util.ReferenceCounted;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.IIndexSaveParticipant;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.IndexManager;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.IndexPersistenceManager;
+import org.eclipse.papyrus.infra.emf.internal.resource.index.InternalModelIndex;
import com.google.common.base.Function;
-import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
-import com.google.common.collect.Multimap;
-import com.google.common.collect.Queues;
import com.google.common.collect.SetMultimap;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
@@ -79,58 +57,86 @@ import com.google.common.util.concurrent.ListenableFuture;
/**
* A general-purpose index of model resources in the Eclipse workspace.
*/
-public class WorkspaceModelIndex<T> {
- private static final int MAX_INDEX_RETRIES = 3;
+public class WorkspaceModelIndex<T> extends InternalModelIndex {
+ private static final long INDEX_RECORD_SERIAL_VERSION = 1L;
private final IndexHandler<? extends T> indexer;
+ private final PersistentIndexHandler<T> pIndexer;
- private final QualifiedName indexKey;
+ private final String indexName;
private final IContentType contentType;
+ private final IWorkspaceRoot wsRoot = ResourcesPlugin.getWorkspace().getRoot();
private final SetMultimap<IProject, IFile> index = HashMultimap.create();
- private final IResourceChangeListener workspaceListener = new WorkspaceListener();
- private final Map<IProject, AbstractIndexJob> activeJobs = Maps.newHashMap();
- private final ContentTypeService contentTypeService;
private final Set<String> fileExtensions;
-
- private final JobWrangler jobWrangler;
-
- private final CopyOnWriteArrayList<IWorkspaceModelIndexListener> listeners = Lists.newCopyOnWriteArrayList();
+ private boolean started;
public WorkspaceModelIndex(String name, String contentType, IndexHandler<? extends T> indexer) {
this(name, contentType, indexer, 0);
}
public WorkspaceModelIndex(String name, String contentType, IndexHandler<? extends T> indexer, int maxConcurrentJobs) {
- super();
+ this(name, contentType,
+ Platform.getContentTypeManager().getContentType(contentType).getFileSpecs(IContentType.FILE_EXTENSION_SPEC),
+ indexer, maxConcurrentJobs);
+ }
+
+ /**
+ * @since 2.1
+ */
+ public WorkspaceModelIndex(String name, String contentType, String[] fileExtensions, IndexHandler<? extends T> indexer, int maxConcurrentJobs) {
+ this(name, contentType, fileExtensions, indexer, null, maxConcurrentJobs);
+ }
+
+ /**
+ * @since 2.1
+ */
+ public WorkspaceModelIndex(String name, String contentType, PersistentIndexHandler<T> indexer) {
+ this(name, contentType, indexer, 0);
+ }
+
+ /**
+ * @since 2.1
+ */
+ public WorkspaceModelIndex(String name, String contentType, PersistentIndexHandler<T> indexer, int maxConcurrentJobs) {
+ this(name, contentType,
+ Platform.getContentTypeManager().getContentType(contentType).getFileSpecs(IContentType.FILE_EXTENSION_SPEC),
+ indexer, maxConcurrentJobs);
+ }
- this.indexKey = new QualifiedName("org.eclipse.papyrus.modelindex", name); //$NON-NLS-1$
+ /**
+ * @since 2.1
+ */
+ public WorkspaceModelIndex(String name, String contentType, String[] fileExtensions, PersistentIndexHandler<T> indexer, int maxConcurrentJobs) {
+ this(name, contentType, fileExtensions, indexer, indexer, maxConcurrentJobs);
+ }
+
+ private WorkspaceModelIndex(String name, String contentType, String[] fileExtensions, IndexHandler<? extends T> indexer, PersistentIndexHandler<T> pIndexer, int maxConcurrentJobs) {
+ super(new QualifiedName(Activator.PLUGIN_ID, "index:" + name), maxConcurrentJobs); //$NON-NLS-1$
+
+ this.indexName = name;
this.contentType = Platform.getContentTypeManager().getContentType(contentType);
this.indexer = indexer;
+ this.pIndexer = pIndexer;
- String[] fileSpecs = this.contentType.getFileSpecs(IContentType.FILE_EXTENSION_SPEC);
- if ((fileSpecs != null) && (fileSpecs.length > 0)) {
- fileExtensions = ImmutableSet.copyOf(fileSpecs);
+ if ((fileExtensions != null) && (fileExtensions.length > 0)) {
+ this.fileExtensions = ImmutableSet.copyOf(fileExtensions);
} else {
- fileExtensions = null;
+ this.fileExtensions = null;
}
-
- contentTypeService = ContentTypeService.getInstance();
- jobWrangler = new JobWrangler(maxConcurrentJobs);
-
- startIndex();
}
+ @Override
public void dispose() {
- ResourcesPlugin.getWorkspace().removeResourceChangeListener(workspaceListener);
- Job.getJobManager().cancel(this);
- ContentTypeService.dispose(contentTypeService);
+ if (pIndexer != null) {
+ IndexPersistenceManager.INSTANCE.removeIndex(this);
+ }
synchronized (index) {
for (IFile next : index.values()) {
try {
- next.setSessionProperty(indexKey, null);
+ next.setSessionProperty(getIndexKey(), null);
} catch (CoreException e) {
// Just continue, best-effort. There's nothing else to do
}
@@ -140,23 +146,153 @@ public class WorkspaceModelIndex<T> {
}
}
- private void startIndex() {
- IWorkspace workspace = ResourcesPlugin.getWorkspace();
- workspace.addResourceChangeListener(workspaceListener, IResourceChangeEvent.POST_CHANGE);
+ /**
+ * @since 2.1
+ */
+ @Override
+ protected final void start() {
+ if (started) {
+ throw new IllegalStateException("index already started: " + getName()); //$NON-NLS-1$
+ }
+ started = true;
- index(Arrays.asList(workspace.getRoot().getProjects()));
+ // If we support persistence, initialize from the store
+ if (pIndexer != null) {
+ InputStream storeInput = IndexPersistenceManager.INSTANCE.addIndex(this, createSaveParticipant());
+ if (storeInput != null) {
+ try {
+ loadIndex(storeInput);
+ } catch (IOException e) {
+ // The input was already closed, if it could be
+ Activator.log.error("Failed to load index data for " + getName(), e); //$NON-NLS-1$
+ }
+ }
+ }
}
- void index(Collection<? extends IProject> projects) {
- List<IndexProjectJob> jobs = Lists.newArrayListWithCapacity(projects.size());
- for (IProject next : projects) {
- jobs.add(new IndexProjectJob(next));
+ private void loadIndex(InputStream storeInput) throws IOException {
+ List<IndexRecord> store = loadStore(storeInput);
+
+ synchronized (index) {
+ for (IndexRecord record : store) {
+ if (record.file.isAccessible()) {
+ try {
+ record.file.setSessionProperty(getIndexKey(), record);
+ index.put(record.file.getProject(), record.file);
+ } catch (CoreException e) {
+ // Doesn't matter; it will be indexed from scratch, then
+ Activator.log.log(e.getStatus());
+ }
+ }
+ }
}
- schedule(jobs);
}
- void index(IProject project) {
- schedule(new IndexProjectJob(project));
+ private List<IndexRecord> loadStore(InputStream storeInput) throws IOException {
+ List<IndexRecord> result = Collections.emptyList();
+
+ try (InputStream outer = storeInput; ObjectInputStream input = createObjectInput(outer)) {
+ // Load the version. So far, we're at the first version
+ long version = input.readLong();
+ if (version != INDEX_RECORD_SERIAL_VERSION) {
+ throw new IOException("Unexpected index record serial version " + version); //$NON-NLS-1$
+ }
+
+ // Read the number of records
+ int count = input.readInt();
+ result = new ArrayList<>(count);
+
+ // Read the records
+ for (int i = 0; i < count; i++) {
+ try {
+ result.add(readIndexRecord(input));
+ } catch (ClassNotFoundException e) {
+ throw new IOException(e);
+ }
+ }
+ }
+
+ return result;
+ }
+
+ private IndexRecord readIndexRecord(ObjectInput in) throws IOException, ClassNotFoundException {
+ // Load the file
+ IPath path = new Path((String) in.readObject());
+ IFile file = wsRoot.getFile(path);
+
+ // Load the index data
+ @SuppressWarnings("unchecked")
+ T index = (T) in.readObject();
+
+ return new IndexRecord(file, index);
+ }
+
+ private IIndexSaveParticipant createSaveParticipant() {
+ return new IIndexSaveParticipant() {
+ @Override
+ public void save(WorkspaceModelIndex<?> index, OutputStream storeOutput) throws IOException, CoreException {
+ if (index == WorkspaceModelIndex.this) {
+ List<IndexRecord> store;
+
+ synchronized (index) {
+ store = index.index.values().stream()
+ .filter(IResource::isAccessible)
+ .map(f -> {
+ IndexRecord result = null;
+
+ try {
+ @SuppressWarnings("unchecked")
+ IndexRecord __ = (IndexRecord) f.getSessionProperty(getIndexKey());
+ result = __;
+ } catch (CoreException e) {
+ // Doesn't matter; we'll just index it next time
+ Activator.log.log(e.getStatus());
+ }
+
+ return result;
+ })
+ .collect(Collectors.toList());
+ }
+
+ saveStore(store, storeOutput);
+ }
+ }
+ };
+ }
+
+ private void saveStore(List<IndexRecord> store, OutputStream storeOutput) throws IOException {
+ try (ObjectOutputStream output = new ObjectOutputStream(storeOutput)) {
+ // Write the version
+ output.writeLong(INDEX_RECORD_SERIAL_VERSION);
+
+ // Write the number of records
+ output.writeInt(store.size());
+
+ // Write the records
+ for (IndexRecord next : store) {
+ writeIndexRecord(next, output);
+ }
+ }
+ }
+
+ private void writeIndexRecord(IndexRecord record, ObjectOutput out) throws IOException {
+ out.writeObject(record.file.getFullPath().toPortableString());
+ out.writeObject(record.index);
+ }
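Taken together, loadStore/saveStore and the record reader/writer above define a small on-disk layout: a serial-version long, a record count, then one (path, index-datum) pair per record. A hedged, stdlib-only sketch of a round trip through that layout (IndexRecord reduced here to a String datum keyed by a workspace path; all names are illustrative, not the Papyrus API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the persistent index-store layout used above.
class IndexStoreSketch {
	static final long VERSION = 1L;

	static byte[] save(Map<String, String> records) throws IOException {
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
			out.writeLong(VERSION); // format version header
			out.writeInt(records.size()); // number of records
			for (Map.Entry<String, String> e : records.entrySet()) {
				out.writeObject(e.getKey()); // workspace-relative path
				out.writeObject(e.getValue()); // the index datum for that file
			}
		}
		return bytes.toByteArray();
	}

	static Map<String, String> load(byte[] data) throws IOException, ClassNotFoundException {
		Map<String, String> result = new LinkedHashMap<>();
		try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
			long version = in.readLong();
			if (version != VERSION) {
				throw new IOException("Unexpected index record serial version " + version);
			}
			int count = in.readInt();
			for (int i = 0; i < count; i++) {
				result.put((String) in.readObject(), (String) in.readObject());
			}
		}
		return result;
	}

	public static void main(String[] args) throws Exception {
		Map<String, String> store = new LinkedHashMap<>();
		store.put("/project/model.uml", "some-index-datum");
		System.out.println(load(save(store)).equals(store)); // round trip preserves the store
	}
}
```

Writing the version header first is what lets loadStore fail fast with an IOException on a format change instead of deserializing garbage.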
+
+ /**
+ * Obtains the name of this index.
+ *
+ * @return my name
+ * @since 2.1
+ */
+ public final String getName() {
+ return indexName;
+ }
+
+ @Override
+ public String toString() {
+ return String.format("WorkspaceModelIndex(%s)", getName()); //$NON-NLS-1$
}
/**
@@ -174,8 +310,9 @@ public class WorkspaceModelIndex<T> {
}
/**
- * Obtains an asynchronous future result that is scheduled to run after any pending indexing work has completed.
- * The {@code callable} is invoked under synchronization on the index, so it must be careful about how it
+ * Obtains an asynchronous future result that is scheduled to run after any
+ * pending indexing work has completed. The {@code callable} is invoked under
+ * synchronization on the index, so it must be careful about how it
* synchronizes on other objects to avoid deadlocks.
*
* @param callable
@@ -183,39 +320,13 @@ public class WorkspaceModelIndex<T> {
*
* @return the future result of the operation
*/
- public <V> ListenableFuture<V> afterIndex(final Callable<V> callable) {
- ListenableFuture<V> result;
-
- if (Job.getJobManager().find(this).length == 0) {
- // Result is available now
- try {
- result = Futures.immediateFuture(callable.call());
- } catch (Exception e) {
- result = Futures.immediateFailedFuture(e);
+ @Override
+ public <V> ListenableFuture<V> afterIndex(Callable<V> callable) {
+ return super.afterIndex(() -> {
+ synchronized (index) {
+ return callable.call();
}
- } else {
- JobBasedFuture<V> job = new JobBasedFuture<V>(NLS.bind("Wait for model index \"{0}\"", indexKey.getLocalName())) {
- {
- // setSystem(true);
- }
-
- @Override
- protected V compute(IProgressMonitor monitor) throws Exception {
- V result;
-
- Job.getJobManager().join(WorkspaceModelIndex.this, monitor);
- synchronized (index) {
- result = callable.call();
- }
-
- return result;
- }
- };
- job.schedule();
- result = job;
- }
-
- return result;
+ });
}
/**
@@ -259,9 +370,9 @@ public class WorkspaceModelIndex<T> {
for (IFile next : index.values()) {
try {
@SuppressWarnings("unchecked")
- T value = (T) next.getSessionProperty(indexKey);
- if (value != null) {
- result.put(next, value);
+ IndexRecord record = (IndexRecord) next.getSessionProperty(getIndexKey());
+ if (record != null) {
+ result.put(next, record.index);
}
} catch (CoreException e) {
Activator.log.error("Failed to access index data for file " + next.getFullPath(), e); //$NON-NLS-1$
@@ -271,17 +382,32 @@ public class WorkspaceModelIndex<T> {
return result.build();
}
- void process(IFile file) throws CoreException {
+ /**
+ * @since 2.1
+ */
+ @Override
+ protected final void process(IFile file) throws CoreException {
IProject project = file.getProject();
if (match(file)) {
- add(project, file);
+ @SuppressWarnings("unchecked")
+ IndexRecord record = (IndexRecord) file.getSessionProperty(getIndexKey());
+ if ((record == null) || record.isObsolete()) {
+ add(project, file);
+ } else {
+ // If it's not obsolete, then we're loading it from persistent storage
+ init(project, file, record);
+ }
} else {
remove(project, file);
}
}
- boolean match(IFile file) {
+ /**
+ * @since 2.1
+ */
+ @Override
+ protected final boolean match(IFile file) {
boolean result = false;
// Don't even attempt to match the content type if the file extension doesn't match.
@@ -291,7 +417,7 @@ public class WorkspaceModelIndex<T> {
&& ((fileExtensions == null) || fileExtensions.contains(file.getFileExtension()))
&& file.isSynchronized(IResource.DEPTH_ZERO)) {
- IContentType[] contentTypes = contentTypeService.getContentTypes(file);
+ IContentType[] contentTypes = getContentTypes(file);
if (contentTypes != null) {
for (int i = 0; (i < contentTypes.length) && !result; i++) {
result = contentTypes[i].isKindOf(contentType);
@@ -302,180 +428,72 @@ public class WorkspaceModelIndex<T> {
return result;
}
- void add(IProject project, IFile file) throws CoreException {
- synchronized (index) {
- index.put(project, file);
- file.setSessionProperty(indexKey, indexer.index(file));
+ void init(IProject project, IFile file, IndexRecord record) throws CoreException {
+ if (pIndexer.load(file, record.index)) {
+ synchronized (index) {
+ index.put(project, file);
+ file.setSessionProperty(getIndexKey(), record);
+ }
}
}
- void remove(IProject project, IFile file) throws CoreException {
- synchronized (index) {
- index.remove(project, file);
- indexer.unindex(file);
+ void add(IProject project, IFile file) throws CoreException {
+ T data = indexer.index(file);
- if (file.exists()) {
- file.setSessionProperty(indexKey, null);
- }
+ synchronized (index) {
+ index.put(project, file);
+ file.setSessionProperty(getIndexKey(), new IndexRecord(file, data));
}
}
- void remove(IProject project) throws CoreException {
+ /**
+ * @since 2.1
+ */
+ @Override
+ protected final void remove(IProject project, IFile file) throws CoreException {
+ boolean unindex;
+
synchronized (index) {
- if (index.containsKey(project)) {
- for (IFile next : index.get(project)) {
- indexer.unindex(next);
- }
- index.removeAll(project);
- }
+ // Don't need to do any work on the index data if
+ // this wasn't in the index in the first place
+ unindex = index.remove(project, file);
}
- }
- ReindexProjectJob reindex(IProject project, Iterable<? extends IndexDelta> deltas) {
- ReindexProjectJob result = null;
-
- synchronized (activeJobs) {
- AbstractIndexJob active = activeJobs.get(project);
-
- if (active != null) {
- switch (active.kind()) {
- case REINDEX:
- @SuppressWarnings("unchecked")
- ReindexProjectJob reindex = (ReindexProjectJob) active;
- reindex.addDeltas(deltas);
- break;
- case INDEX:
- @SuppressWarnings("unchecked")
- IndexProjectJob index = (IndexProjectJob) active;
- ReindexProjectJob followup = index.getFollowup();
- if (followup != null) {
- followup.addDeltas(deltas);
- } else {
- followup = new ReindexProjectJob(project, deltas);
- index.setFollowup(followup);
- }
- break;
- case MASTER:
- throw new IllegalStateException("Master job is in the active table."); //$NON-NLS-1$
+ if (unindex) {
+ try {
+ indexer.unindex(file);
+ } finally {
+ if (file.exists()) {
+ file.setSessionProperty(getIndexKey(), null);
}
- } else {
- // No active job. We'll need a new one
- result = new ReindexProjectJob(project, deltas);
}
}
-
- return result;
}
- IResourceVisitor getWorkspaceVisitor(final IProgressMonitor monitor) {
- return new IResourceVisitor() {
-
- @Override
- public boolean visit(IResource resource) throws CoreException {
- if (resource.getType() == IResource.FILE) {
- process((IFile) resource);
- }
-
- return !monitor.isCanceled();
- }
- };
- }
+ /**
+ * @since 2.1
+ */
+ @Override
+ protected final void remove(IProject project) throws CoreException {
+ Set<IFile> files;
- private void schedule(Collection<? extends AbstractIndexJob> jobs) {
- // Synchronize on the active jobs because this potentially alters the wrangler's follow-up job
- synchronized (activeJobs) {
- jobWrangler.add(jobs);
+ synchronized (index) {
+ files = index.containsKey(project)
+ ? index.removeAll(project)
+ : null;
}
- }
- private void schedule(AbstractIndexJob job) {
- // Synchronize on the active jobs because this potentially alters the wrangler's follow-up job
- synchronized (activeJobs) {
- jobWrangler.add(job);
+ if (files != null) {
+ files.forEach(indexer::unindex);
}
}
public void addListener(IWorkspaceModelIndexListener listener) {
- listeners.addIfAbsent(listener);
+ IndexManager.getInstance().addListener(this, listener);
}
public void removeListener(IWorkspaceModelIndexListener listener) {
- listeners.remove(listener);
- }
-
- private void notifyStarting(AbstractIndexJob indexJob) {
- if (!listeners.isEmpty()) {
- WorkspaceModelIndexEvent event;
-
- switch (indexJob.kind()) {
- case INDEX:
- event = new WorkspaceModelIndexEvent(this, WorkspaceModelIndexEvent.ABOUT_TO_CALCULATE, indexJob.getProject());
- for (IWorkspaceModelIndexListener next : listeners) {
- try {
- next.indexAboutToCalculate(event);
- } catch (Exception e) {
- Activator.log.error("Uncaught exception in index listsner.", e); //$NON-NLS-1$
- }
- }
- break;
- case REINDEX:
- event = new WorkspaceModelIndexEvent(this, WorkspaceModelIndexEvent.ABOUT_TO_RECALCULATE, indexJob.getProject());
- for (IWorkspaceModelIndexListener next : listeners) {
- try {
- next.indexAboutToRecalculate(event);
- } catch (Exception e) {
- Activator.log.error("Uncaught exception in index listsner.", e); //$NON-NLS-1$
- }
- }
- break;
- case MASTER:
- // Pass
- break;
- }
- }
- }
-
- private void notifyFinished(AbstractIndexJob indexJob, IStatus status) {
- if (!listeners.isEmpty()) {
- WorkspaceModelIndexEvent event;
-
- if ((status != null) && (status.getSeverity() >= IStatus.ERROR)) {
- event = new WorkspaceModelIndexEvent(this, WorkspaceModelIndexEvent.FAILED, indexJob.getProject());
- for (IWorkspaceModelIndexListener next : listeners) {
- try {
- next.indexFailed(event);
- } catch (Exception e) {
- Activator.log.error("Uncaught exception in index listsner.", e); //$NON-NLS-1$
- }
- }
- } else {
- switch (indexJob.kind()) {
- case INDEX:
- event = new WorkspaceModelIndexEvent(this, WorkspaceModelIndexEvent.CALCULATED, indexJob.getProject());
- for (IWorkspaceModelIndexListener next : listeners) {
- try {
- next.indexCalculated(event);
- } catch (Exception e) {
- Activator.log.error("Uncaught exception in index listsner.", e); //$NON-NLS-1$
- }
- }
- break;
- case REINDEX:
- event = new WorkspaceModelIndexEvent(this, WorkspaceModelIndexEvent.RECALCULATED, indexJob.getProject());
- for (IWorkspaceModelIndexListener next : listeners) {
- try {
- next.indexRecalculated(event);
- } catch (Exception e) {
- Activator.log.error("Uncaught exception in index listsner.", e); //$NON-NLS-1$
- }
- }
- break;
- case MASTER:
- // Pass
- break;
- }
- }
- }
+ IndexManager.getInstance().removeListener(this, listener);
}
//
@@ -505,537 +523,48 @@ public class WorkspaceModelIndex<T> {
void unindex(IFile file);
}
- private enum JobKind {
- MASTER, INDEX, REINDEX;
-
- boolean isSystem() {
- return this != MASTER;
- }
- }
-
- private abstract class AbstractIndexJob extends Job {
- private final IProject project;
-
- private volatile Semaphore permit;
-
- AbstractIndexJob(String name, IProject project) {
- super(name);
-
- this.project = project;
- this.permit = permit;
-
- if (project != null) {
- setRule(project);
- synchronized (activeJobs) {
- if (!activeJobs.containsKey(project)) {
- activeJobs.put(project, this);
- }
- }
- }
-
- setSystem(kind().isSystem());
- }
-
- @Override
- public boolean belongsTo(Object family) {
- return family == WorkspaceModelIndex.this;
- }
-
- final IProject getProject() {
- return project;
- }
-
- abstract JobKind kind();
-
- @Override
- protected final IStatus run(IProgressMonitor monitor) {
- IStatus result;
-
- try {
- result = doRun(monitor);
- } finally {
- synchronized (activeJobs) {
- AbstractIndexJob followup = getFollowup();
-
- if (project != null) {
- if (followup == null) {
- activeJobs.remove(project);
- } else {
- activeJobs.put(project, followup);
- }
- }
-
- if (followup != null) {
- // Kick off the follow-up job
- WorkspaceModelIndex.this.schedule(followup);
- }
- }
- }
-
- return result;
- }
-
- final Semaphore getPermit() {
- return permit;
- }
-
- final void setPermit(Semaphore permit) {
- this.permit = permit;
- }
-
- protected abstract IStatus doRun(IProgressMonitor monitor);
-
- protected AbstractIndexJob getFollowup() {
- return null;
- }
- }
-
- private class JobWrangler extends AbstractIndexJob {
- private final Lock lock = new ReentrantLock();
-
- private final Deque<AbstractIndexJob> queue = Queues.newArrayDeque();
-
- private final AtomicBoolean active = new AtomicBoolean();
- private final Semaphore indexJobSemaphore;
-
- JobWrangler(int maxConcurrentJobs) {
- super("Workspace model indexer", null);
-
- indexJobSemaphore = new Semaphore((maxConcurrentJobs <= 0) ? Integer.MAX_VALUE : maxConcurrentJobs);
- }
-
- @Override
- JobKind kind() {
- return JobKind.MASTER;
- }
-
- void add(AbstractIndexJob job) {
- lock.lock();
-
- try {
- scheduleIfNeeded();
- queue.add(job);
- } finally {
- lock.unlock();
- }
- }
-
- private void scheduleIfNeeded() {
- if (active.compareAndSet(false, true)) {
- // I am a new job
- schedule();
- }
- }
-
- void add(Iterable<? extends AbstractIndexJob> jobs) {
- lock.lock();
-
- try {
- for (AbstractIndexJob next : jobs) {
- add(next);
- }
- } finally {
- lock.unlock();
- }
- }
-
- @Override
- protected IStatus doRun(IProgressMonitor progressMonitor) {
- final AtomicInteger pending = new AtomicInteger(); // How many permits have we issued?
- final Condition pendingChanged = lock.newCondition();
-
- final SubMonitor monitor = SubMonitor.convert(progressMonitor, IProgressMonitor.UNKNOWN);
-
- IStatus result = Status.OK_STATUS;
-
- IJobChangeListener listener = new JobChangeAdapter() {
- private final Map<IProject, Integer> retries = Maps.newHashMap();
-
- private Semaphore getIndexJobPermit(Job job) {
- return (job instanceof WorkspaceModelIndex<?>.AbstractIndexJob)
- ? ((WorkspaceModelIndex<?>.AbstractIndexJob) job).getPermit()
- : null;
- }
-
- @Override
- public void aboutToRun(IJobChangeEvent event) {
- Job starting = event.getJob();
-
- if (getIndexJobPermit(starting) == indexJobSemaphore) {
- // one of mine is starting
- @SuppressWarnings("unchecked")
- AbstractIndexJob indexJob = (AbstractIndexJob) starting;
- notifyStarting(indexJob);
- }
- }
-
- @Override
- public void done(IJobChangeEvent event) {
- final Job finished = event.getJob();
- if (getIndexJobPermit(finished) == indexJobSemaphore) {
- try {
- // one of mine has finished
- @SuppressWarnings("unchecked")
- AbstractIndexJob indexJob = (AbstractIndexJob) finished;
- IProject project = indexJob.getProject();
-
- notifyFinished(indexJob, event.getResult());
-
- if (project != null) {
- synchronized (retries) {
- if ((event.getResult() != null) && (event.getResult().getSeverity() >= IStatus.ERROR)) {
- // Indexing failed to complete. Need to re-build the index
- int count = retries.containsKey(project) ? retries.get(project) : 0;
- if (count++ < MAX_INDEX_RETRIES) {
- // Only retry up to three times
- index(project);
- }
- retries.put(project, ++count);
- } else {
- // Successful re-indexing. Forget the retries
- retries.remove(project);
- }
- }
- }
- } finally {
- // Release this job's permit for the next one in the queue
- indexJobSemaphore.release();
-
- // And it's no longer pending
- pending.decrementAndGet();
-
- lock.lock();
- try {
- pendingChanged.signalAll();
- } finally {
- lock.unlock();
- }
- }
- }
- }
- };
-
- getJobManager().addJobChangeListener(listener);
-
- lock.lock();
-
- try {
- out: for (;;) {
- for (AbstractIndexJob next = queue.poll(); next != null; next = queue.poll()) {
- lock.unlock();
- try {
- if (monitor.isCanceled()) {
- Thread.currentThread().interrupt();
- }
-
- // Enforce the concurrent jobs limit
- indexJobSemaphore.acquire();
- next.setPermit(indexJobSemaphore);
- pending.incrementAndGet();
-
- // Now go
- next.schedule();
- } catch (InterruptedException e) {
- // We were cancelled. Push this job back and re-schedule
- lock.lock();
- try {
- queue.addFirst(next);
- } finally {
- lock.unlock();
- }
- result = Status.CANCEL_STATUS;
- break out;
- } finally {
- lock.lock();
- }
- }
-
- if ((pending.get() <= 0) && queue.isEmpty()) {
- // Nothing left to wait for
- break out;
- } else if (pending.get() > 0) {
- try {
- if (monitor.isCanceled()) {
- Thread.currentThread().interrupt();
- }
-
- pendingChanged.await();
- } catch (InterruptedException e) {
- // We were cancelled. Re-schedule
- result = Status.CANCEL_STATUS;
- break out;
- }
- }
- }
-
- // We've finished wrangling index jobs, for now
- } finally {
- active.compareAndSet(true, false);
-
- // If we were canceled then we re-schedule after a delay to recover
- if (result == Status.CANCEL_STATUS) {
- // We cannot un-cancel a job, so we must replace ourselves with a new job
- schedule(1000L);
- } else {
- // Double-check
- if (!queue.isEmpty()) {
- // We'll have to go around again
- scheduleIfNeeded();
- }
- }
-
- lock.unlock();
-
- getJobManager().removeJobChangeListener(listener);
- }
-
- return result;
- }
- }
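The removed JobWrangler above throttles index jobs with a counting semaphore: each job must acquire a permit before it is scheduled and releases it in its `done` callback, so at most `maxConcurrentJobs` run at once (with `<= 0` meaning unbounded). A minimal sketch of that throttling idea in plain Java, using hypothetical names and raw threads instead of the Eclipse Jobs API:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch only: a semaphore caps how many jobs run at once,
// mirroring the JobWrangler's indexJobSemaphore. Each job releases its
// permit when it finishes so the next queued job can start.
public class BoundedRunner {
    private final Semaphore permits;

    public BoundedRunner(int maxConcurrent) {
        // Non-positive limit means effectively unbounded, as in the original
        permits = new Semaphore(maxConcurrent <= 0 ? Integer.MAX_VALUE : maxConcurrent);
    }

    // Blocks when the concurrency limit is reached, then runs the job
    // on its own thread; the permit is released in a finally block.
    public Thread run(Runnable job) throws InterruptedException {
        permits.acquire();
        Thread t = new Thread(() -> {
            try {
                job.run();
            } finally {
                permits.release();
            }
        });
        t.start();
        return t;
    }
}
```

The `finally`-based release matches the original's `done(IJobChangeEvent)` handler, which releases the permit whether the index job succeeded or failed.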
-
- private class IndexProjectJob extends AbstractIndexJob {
- private ReindexProjectJob followup;
-
- IndexProjectJob(IProject project) {
- super("Indexing project " + project.getName(), project);
- }
-
- @Override
- JobKind kind() {
- return JobKind.INDEX;
- }
-
- @Override
- protected IStatus doRun(IProgressMonitor monitor) {
- IStatus result = Status.OK_STATUS;
- final IProject project = getProject();
-
- monitor.beginTask("Indexing models in project " + project.getName(), IProgressMonitor.UNKNOWN);
-
- try {
- if (project.isAccessible()) {
- project.accept(getWorkspaceVisitor(monitor));
- } else {
- remove(project);
- }
-
- if (monitor.isCanceled()) {
- result = Status.CANCEL_STATUS;
- }
- } catch (CoreException e) {
- result = e.getStatus();
- } finally {
- monitor.done();
- }
-
- return result;
- }
-
- void setFollowup(ReindexProjectJob followup) {
- this.followup = followup;
- }
-
- @Override
- protected ReindexProjectJob getFollowup() {
- return followup;
- }
- }
-
- private class WorkspaceListener implements IResourceChangeListener {
- @Override
- public void resourceChanged(IResourceChangeEvent event) {
- final Multimap<IProject, IndexDelta> deltas = ArrayListMultimap.create();
-
- try {
- event.getDelta().accept(new IResourceDeltaVisitor() {
-
- @Override
- public boolean visit(IResourceDelta delta) throws CoreException {
- if (delta.getResource().getType() == IResource.FILE) {
- IFile file = (IFile) delta.getResource();
-
- switch (delta.getKind()) {
- case IResourceDelta.CHANGED:
- if ((delta.getFlags() & (IResourceDelta.SYNC | IResourceDelta.CONTENT | IResourceDelta.REPLACED)) != 0) {
- // Re-index in place
- deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.REINDEX));
- }
- break;
- case IResourceDelta.REMOVED:
- deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.UNINDEX));
- break;
- case IResourceDelta.ADDED:
- deltas.put(file.getProject(), new IndexDelta(file, IndexDelta.DeltaKind.INDEX));
- break;
- }
- }
- return true;
- }
- });
- } catch (CoreException e) {
- Activator.log.error("Failed to analyze resource changes for re-indexing.", e); //$NON-NLS-1$
- }
-
- if (!deltas.isEmpty()) {
- List<ReindexProjectJob> jobs = Lists.newArrayListWithCapacity(deltas.keySet().size());
- for (IProject next : deltas.keySet()) {
- ReindexProjectJob reindex = reindex(next, deltas.get(next));
- if (reindex != null) {
- jobs.add(reindex);
- }
- }
- schedule(jobs);
- }
- }
- }
-
- private static class IndexDelta {
- private final IFile file;
-
- private final DeltaKind kind;
-
- IndexDelta(IFile file, DeltaKind kind) {
- this.file = file;
- this.kind = kind;
- }
-
- //
- // Nested types
- //
-
- enum DeltaKind {
- INDEX, REINDEX, UNINDEX
- }
- }
-
- private class ReindexProjectJob extends AbstractIndexJob {
- private final IProject project;
- private final ConcurrentLinkedQueue<IndexDelta> deltas;
-
- ReindexProjectJob(IProject project, Iterable<? extends IndexDelta> deltas) {
- super("Re-indexing project " + project.getName(), project);
- this.project = project;
- this.deltas = Queues.newConcurrentLinkedQueue(deltas);
- }
-
- @Override
- JobKind kind() {
- return JobKind.REINDEX;
- }
-
- void addDeltas(Iterable<? extends IndexDelta> deltas) {
- Iterables.addAll(this.deltas, deltas);
- }
-
- @Override
- protected IStatus doRun(IProgressMonitor monitor) {
- IStatus result = Status.OK_STATUS;
-
- monitor.beginTask("Re-indexing models in project " + project.getName(), IProgressMonitor.UNKNOWN);
-
- try {
- for (IndexDelta next = deltas.poll(); next != null; next = deltas.poll()) {
- if (monitor.isCanceled()) {
- result = Status.CANCEL_STATUS;
- break;
- }
-
- try {
- switch (next.kind) {
- case INDEX:
- case REINDEX:
- process(next.file);
- break;
- case UNINDEX:
- remove(project, next.file);
- break;
- }
- } catch (CoreException e) {
- result = e.getStatus();
- break;
- } finally {
- monitor.worked(1);
- }
- }
- } finally {
- monitor.done();
- }
-
- return result;
- }
-
- @Override
- protected AbstractIndexJob getFollowup() {
- // If I still have work to do, then I am my own follow-up
- return deltas.isEmpty() ? null : this;
- }
+ /**
+ * Extension interface for index handlers that provide persistable index
+ * data associated with each file. This enables storage of the index in
+ * the workspace metadata for quick initialization on start-up, requiring
+ * re-calculation of the index only for files that were changed since the
+ * workspace was last closed.
+ *
+ * @param <T>
+ * the index data store type, which must be {@link Serializable}
+ * @since 2.1
+ */
+	public interface PersistentIndexHandler<T> extends IndexHandler<T> {
+ /**
+ * Initializes the {@code index} data for a file from the persistent store.
+ *
+ * @param file
+ * a file in the workspace
+ * @param index
+ * its previously stored index
+ *
+ * @return whether the {@code index} data were successfully integrated.
+ * A {@code false} result indicates that the file must be indexed
+ * from scratch
+ */
+ boolean load(IFile file, T index);
}
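The new PersistentIndexHandler contract relies on the index payload being `Serializable` so it can be written to workspace metadata at shutdown and read back at start-up, falling back to a full re-index when the stored data cannot be used. A sketch of that save/load round-trip, using plain `java.io`/`java.nio.file` and hypothetical class names in place of the Eclipse `IFile` API:

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Illustrative sketch only: persists a per-file index keyed by path,
// pairing each payload with the modification stamp it was built from.
public class PersistedIndexStore {
    static class Entry implements Serializable {
        private static final long serialVersionUID = 1L;
        final long stamp;                         // modification stamp at index time
        final HashMap<String, Integer> data;      // example serializable payload
        Entry(long stamp, HashMap<String, Integer> data) {
            this.stamp = stamp;
            this.data = data;
        }
    }

    // Write the whole index to a metadata file (e.g. on workspace shutdown)
    static void save(Path store, Map<String, Entry> index) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(store))) {
            out.writeObject(new HashMap<>(index));
        }
    }

    // Read it back on start-up; a missing or corrupt store yields an empty
    // index, analogous to load() returning false and forcing re-indexing.
    @SuppressWarnings("unchecked")
    static Map<String, Entry> load(Path store) {
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(store))) {
            return (Map<String, Entry>) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return new HashMap<>();
        }
    }
}
```

Pairing each payload with its modification stamp is what lets start-up skip files whose stamps still match and re-index only the ones changed while the workspace was closed.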
- private static final class ContentTypeService extends ReferenceCounted<ContentTypeService> {
- private static ContentTypeService instance = null;
-
- private final ExecutorService serialExecution = new JobExecutorService();
-
- private final IContentTypeManager mgr = Platform.getContentTypeManager();
+ private final class IndexRecord {
+ private IFile file;
+ private long generation;
+ private T index;
- private ContentTypeService() {
+ IndexRecord(IFile file, T index) {
super();
- }
-
- synchronized static ContentTypeService getInstance() {
- ContentTypeService result = instance;
- if (result == null) {
- result = new ContentTypeService();
- instance = result;
- }
-
- return result.retain();
- }
-
- synchronized static void dispose(ContentTypeService service) {
- service.release();
- }
-
- @Override
- protected void dispose() {
- serialExecution.shutdownNow();
-
- if (instance == this) {
- instance = null;
- }
+ this.file = file;
+ this.generation = file.getModificationStamp();
+ this.index = index;
}
- IContentType[] getContentTypes(final IFile file) {
- Future<IContentType[]> futureResult = serialExecution.submit(new Callable<IContentType[]>() {
-
- @Override
- public IContentType[] call() {
- IContentType[] result = null;
- InputStream input = null;
-
- if (file.isAccessible()) {
- try {
- input = file.getContents(true);
- result = mgr.findContentTypesFor(input, file.getName());
- } catch (Exception e) {
- Activator.log.error("Failed to index file " + file.getFullPath(), e); //$NON-NLS-1$
- } finally {
- if (input != null) {
- try {
- input.close();
- } catch (IOException e) {
- Activator.log.error("Failed to close indexed file " + file.getFullPath(), e); //$NON-NLS-1$
- }
- }
- }
- }
-
- return result;
- }
- });
-
- return Futures.getUnchecked(futureResult);
+ boolean isObsolete() {
+ return file.getModificationStamp() != generation;
}
}
}
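The new IndexRecord detects staleness by capturing the file's modification stamp at construction and comparing it against the current stamp in `isObsolete()`. The same generation-stamp pattern can be sketched with plain `java.nio.file` timestamps standing in for Eclipse modification stamps (names here are hypothetical):

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative sketch only: captures a file's stamp when the record is
// built; the record is stale as soon as the current stamp differs.
public class StampedRecord {
    private final Path file;
    private final long generation; // stamp captured at index time

    public StampedRecord(Path file) throws IOException {
        this.file = file;
        this.generation = Files.getLastModifiedTime(file).toMillis();
    }

    // Compare the live stamp against the one captured at construction
    public boolean isObsolete() throws IOException {
        return Files.getLastModifiedTime(file).toMillis() != generation;
    }
}
```

Comparing stamps for inequality (rather than ordering) also catches replacements that restore an older timestamp, which is why the original tests `!=` instead of `>`.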
