The central feature of OSEE is an extensible framework called the OSEE Application Framework. Default applications distributed with the OSEE framework are OSEE Define (for requirements management) and OSEE ATS (the Action Tracking System, for configuration management).
The Application Framework provides all the necessary services to allow the applications to persist and share data in a common, version controlled object database. Just as Eclipse provides the ability to add a plugin to the existing Eclipse environment, so OSEE allows other applications to add plugins and share the common data store.
Just as Eclipse RCP allows an application to be built and deployed on the Eclipse framework without including all the standard applications such as the JDT, OSEE allows an application to be built and deployed using the OSEE Application Framework without including applications such as OSEE Define and OSEE ATS.
In order to attain a greater degree of scalability, the Open System Engineering Environment (OSEE) has been slowly migrating into a distributed architecture where clients interact with an application server, which is in charge of managing access to an OSEE data store.
Additionally, in an effort to provide load balancing, failure recovery, and code compatibility, clients consult an arbitration server before connecting to an application server. The arbitration server's responsibility is to keep track of all the application servers interacting with a common data store and direct clients to a healthy application server compatible with the client's OSEE code version. In this arrangement, arbitration servers act as the initial access points into the OSEE server cloud where a collection of application servers manage client requests to access and operate on a common OSEE data store. Figure 1 shows an example of the OSEE Client/Server network.
In Figure 1, three application servers interact with a single OSEE data store. The data store is comprised of a relational database and a remote file system used to store binary data. It is not necessary for the database and the binary data to exist on the same machine; the only requirement is that the application servers have access to both resources. Upon start-up, each application server registers itself in the data store's server lookup table by entering its host address, port, supported code versions, and its unique id. When the arbitration server receives a request to find an application server to support a client connection, the arbitration server reads the data store's server lookup table and selects the best match for the client. The client requests this information from the arbitration server upon start-up or whenever it cannot communicate with an application server. It is important to note that the arbitration server does not have to be a different server from an application server. All application servers are able to act as an arbitration server; an application server is referred to as an arbitration server when clients interact with it in this context. Figure 2 depicts the sequence of events involved in the arbitration process.
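The selection step can be sketched as a version-compatibility filter over the server lookup table. The class and field names below are illustrative only, not OSEE's actual lookup-table schema:

```java
import java.util.List;
import java.util.Optional;

// Illustrative sketch of arbitration: pick an application server from the
// lookup table whose supported code versions include the client's version.
// Names and fields are assumptions, not OSEE's actual schema.
record ServerEntry(String host, int port, List<String> supportedVersions) {}

class Arbitration {
   static Optional<ServerEntry> selectServer(List<ServerEntry> lookupTable, String clientVersion) {
      return lookupTable.stream()
         .filter(e -> e.supportedVersions().contains(clientVersion))
         .findFirst();
   }
}
```

A real arbitration server would additionally weigh server health and load before answering the client.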
Once a client receives an application server's address and port information, the client must authenticate with the application server before it can gain access to the OSEE data store. During the authentication process, a client submits to the application server the current user's credential information and the authentication protocol id to use during the process. The application server verifies the user via the selected protocol and grants access to the data store by creating a session for the user. From this point forward, the application server will be responsible for managing access to the data store by identifying the user via the session id. Whenever a client wants to interact with the application server, it will need to submit its session id in order to gain access to the OSEE data store. Figure 3 shows the sequence of events involved in the authentication process.
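The session bookkeeping described above can be sketched as follows. The names are illustrative; OSEE's actual session API differs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch: after authentication succeeds, the application server
// creates a session for the user and later identifies the user by session id
// on every subsequent request.
class SessionManager {
   private final Map<String, String> sessions = new HashMap<>(); // sessionId -> userId

   String createSession(String userId) {
      String sessionId = UUID.randomUUID().toString();
      sessions.put(sessionId, userId);
      return sessionId;
   }

   // Returns the user owning the session, or null if the id is unknown.
   String userFor(String sessionId) {
      return sessions.get(sessionId);
   }
}
```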
The OSEE framework is built around a user-configurable and extensible data model consisting of attributes, artifacts, and relations. An attribute is a key-value pair representing a single data element such as a description, a date, a number, or a file. These basic data elements are grouped into artifacts. Artifacts can be configured to have any number of attributes. By default, an artifact always has an attribute of type "Name". In addition, artifacts can be related to one another via relations. By default, an artifact always has a default hierarchy relation type, which allows artifacts to be connected together in a tree. In the example below, two instances of the basic artifact type are shown. Artifact 1 has an attribute of type "Name" set to the string "X". Artifact 2 has an attribute of type "Name" set to the string "Y". These two artifact instances are related via the default hierarchy relation type: Artifact 1 is Artifact 2's parent.
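The example above can be sketched with plain data structures. This is not OSEE's API, just the shape of the data model:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the OSEE data model: artifacts hold named attributes
// and are connected through a default hierarchy relation forming a tree.
class Artifact {
   final Map<String, String> attributes = new HashMap<>();
   Artifact parent;                        // default hierarchy relation
   final List<Artifact> children = new ArrayList<>();

   Artifact(String name) {
      attributes.put("Name", name);        // every artifact has a Name attribute
   }

   void addChild(Artifact child) {
      child.parent = this;
      children.add(child);
   }
}
```

With this sketch, `new Artifact("X").addChild(new Artifact("Y"))` reproduces the Artifact 1 / Artifact 2 relationship from the example.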
Now that we have a basic understanding of the model, let's take a closer look at attributes and how they are defined.
An attribute is defined through its attribute type. The attribute type is a blueprint for constructing attribute instances. It defines the type of data that will be held by the attribute, the data source (who provides it), how many instances can be created, the default value to use during creation, whether the attribute can be tagged for word searches, and, if the attribute holds file data, its file extension.
By default, data contained in an attribute can be represented through OSEE's basic data types.
OSEE provides three attribute data providers: the default attribute data provider, URI attribute data provider, and the Clob attribute data provider.
OSEE can be configured by setting certain Java system properties when launching Eclipse and by setting various attribute values on the Global Preferences artifact in the data store. Java system properties are key/value pairs passed as launch arguments in the form -D{key}={value} (e.g. -Dosee.authentication.protocol=trustAll). These -D options can be specified directly in the command used to launch Eclipse or in the corresponding .ini file for the Eclipse executable. Server-side OSGi properties are specified in a JSON file referenced by the system property cm.config.uri.
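Code running in the client or server reads these launch arguments through the standard Java system-property API. For example (the property name comes from the table below; the fallback default shown here is illustrative):

```java
// Reading OSEE configuration passed as -D launch arguments.
// System.getProperty accepts a fallback value for properties that were not set.
class OseeProps {
   static String authProtocol() {
      return System.getProperty("osee.authentication.protocol", "trustAll");
   }
}
```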
See the file org.eclipse.osee.support.config/launchConfig/osee.postgresql.json for an example.
JdbcComponentFactory receives its OSGi properties from the JSON file referenced by the system property cm.config.uri. JdbcConnectionFactoryManager.getConnection() uses the JDK's DriverManager.getConnection() which in turn uses the Java Standard Edition Service Provider mechanism to load the JDBC driver referenced in the JSON file. The JDBC driver must include the file META-INF/services/java.sql.Driver which contains the name of the JDBC driver implementation of java.sql.Driver.
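For example, the PostgreSQL JDBC driver jar registers its implementation through exactly such a service file:

```
# Contents of META-INF/services/java.sql.Driver inside the driver jar
org.postgresql.Driver
```

Because of this registration, `DriverManager.getConnection()` can locate the driver from the classpath without any explicit `Class.forName` call.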
Do a Quick Search on the Common branch for "Global Preferences" and open the resulting artifact in the artifact editor. The available attribute types for this artifact define what can be configured. Each attribute is self-documenting, because the attribute tip text documents how to use each one.
System Property Name | Values | Default | Description |
---|---|---|---|
osee.connection.info.uri | [FILE SYSTEM PATH] | | File system path or URI containing custom database connection information. |
osee.db.connection.id | {db identifier} | Default from db.connection file | Specifies which database OSEE should connect to. This id references connection information specified in the ...db.connection.xml file. Refer to the Database Connection Information section for more information. |
osee.jini.forced.reggie.search | true, false | false | If true, adds the lookupList to the global lookup list such that a refresh will try to locate the service again |
osee.jini.lookup.groups | user defined group name | | The Jini group that all OSEE-provided Jini services will register with. |
osee.log.default | FINE, INFO, WARNING, SEVERE | WARNING | the default logging level for all loggers |
osee.port.scanner.start.port | 1 - 65535 | 18000 | the first port number to test for availability when a new port is needed |
As described in the Architecture section, clients can be configured to choose a particular server or group of servers. By specifying a server version, the arbitration server will pick only the application servers that are configured to work with the client. For instance, this configuration makes it possible to choose only servers in the same location as the clients.

Steps:

1. Configure each application server on the local server machine(s) to support the local clients.
   a) Set the osee.version system property to a string that provides a common property to use with the OSEE client. Example: in the VM arguments for the server startup, add -Dosee.version="localSiteName"
   b) Set osee.application.server.data to a location on the server for the local copy of the application data. Example: -Dosee.application.server.data="path/to/local/data" Note: this local path could be rsync'd to another site to improve local data performance.
   c) Set the HTTP port to the port number the client will use to access the server. Example: -Dorg.osgi.service.http.port=8092
2. Configure the OSEE client to connect to one of the servers as an arbitration server.
   a) Set the osee.arbitration.server system property to the URL of one of the application servers configured in step 1. Example: -Dosee.arbitration.server=http://your.server.com:8092
   b) Set the osee.version system property to match the application server(s). Example: -Dosee.version="localSiteName"
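Putting the steps together, the resulting VM arguments might look like this (the site name, data path, host, and port are placeholders):

```
# Application server VM arguments
-Dosee.version="localSiteName"
-Dosee.application.server.data="path/to/local/data"
-Dorg.osgi.service.http.port=8092

# Client VM arguments
-Dosee.arbitration.server=http://your.server.com:8092
-Dosee.version="localSiteName"
```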
/*******************************************************************************
 * Copyright (c) 2012 Boeing.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     Boeing - initial API and implementation
 *******************************************************************************/
${package_declaration}

/**
 * @author Joe P. Schmoe
 */
${typecomment}
${type_declaration}
OseeLog.log(Activator.class, Level.SEVERE, ${exception_var});
The data model in OSEE is extensible and runtime user configurable without modification to code or the database schema. Users can define new artifact, attribute, and relation types and their constraints such as multiplicity and applicability. Type inheritance allows similar types to be defined and modified without tedious redundancy because the types inherit what is common from their super type.
The OSEE data model is defined using an Xtext grammar designed by the OSEE team. This allows the types (object model) configuration to be edited much the way you would edit source code, including command completion and error notation when incorrect syntax or keywords are used.
Example of the OSEE Types Editor
Command Completion Example
Error Handling Example
The OSEE types definitions are stored in artifacts and cached during startup. They are edited in OSEE like any other artifact. Simply select the artifact > right-click > open with > OSEE DSL Editor. Convention is to root them off the Common Branch default hierarchy.
All artifact types extend the type "Artifact". The snippet below of the artifact type definition shows some types required by all OSEE configurations.
artifactType "Artifact" {                  <-- Main artifact type
   id 1                                    <-- Unique long artifact id
   attribute "Name"                        <-- List of attributes that are valid for this artifact type
   attribute "Annotation"
   attribute "Content URL"
   attribute "Static Id"
   attribute "Relation Order"
}

artifactType "User" extends "Artifact" {   <-- User artifact extending Artifact
   id 5
   attribute "Active"                      <-- Adding more attributes to those inherited from Artifact
   attribute "Phone"
   attribute "Email"
   attribute "Street Address"
   attribute "Dictionary"
   ...
}
Attribute types define the characteristics (fields) of an artifact. They are strongly typed, which supports data validation and lets editors and applications know how to handle the values returned.
Here's an example of the "Name" attribute that exists on every Artifact.
attributeType "Name" extends StringAttribute {     <-- Extends one of the base attribute types
   id 1152921504606847088                          <-- Unique long id of the attribute type
   dataProvider DefaultAttributeDataProvider       <-- Different dataProviders exist to store data differently
   min 1                                           <-- Minimum number of attributes per artifact; can be 0..n
   max 1                                           <-- Maximum number of attributes allowed; can be 1..n
   taggerId DefaultAttributeTaggerProvider         <-- Defines the tagger used to split the value for searching
   description "Descriptive Name"                  <-- Description of what this attribute stores
   defaultValue "unnamed"                          <-- Default value to be used when min == 1
   mediaType "text/plain"                          <-- Media type for this attribute
}

oseeEnumType "enum.req.subsystem" {                <-- Valid enumerated values for the Subsystem attribute
   id 3458764513820541310
   entry "Robot_API"
   entry "Robot_Survivability_Equipment"
   entry "Robot_Systems_Management"
   entry "Chassis"
   ...
}

attributeType "Subsystem" extends EnumeratedAttribute {   <-- Enumerated attribute definition
   id 1152921504606847112
   dataProvider DefaultAttributeDataProvider
   min 1
   max 1
   taggerId DefaultAttributeTaggerProvider
   enumType "enum.req.subsystem"                   <-- Enumeration definition from above
   defaultValue "Unspecified"
   mediaType "text/plain"
}
Valid Attribute Base-Types are:
Relations provide bi-directional links between artifacts on a branch. Like Artifact and Attribute types, Relation types are strongly typed. You can identify which artifact types are allowed to be on each side of the relation. You can also specify the multiplicity.
relationType "Code-Requirement" {          <-- Relation type name
   id 2305843009213694296                  <-- Unique long id
   sideAName "code"                        <-- Name of the artifacts on side A
   sideAArtifactType "Code Unit"           <-- Valid artifact type for side A
   sideBName "requirement"                 <-- Name of the artifacts on side B
   sideBArtifactType "Requirement"         <-- Valid artifact type for side B
   defaultOrderType Unordered              <-- Default order type
   multiplicity MANY_TO_MANY               <-- Any number of code units can relate to any number of requirements
}

relationType "Component-Requirement" {
   id 2305843009213694297
   sideAName "component"
   sideAArtifactType "Component"
   sideBName "requirement"
   sideBArtifactType "Requirement"
   defaultOrderType Unordered
   multiplicity ONE_TO_MANY                <-- One component can relate to multiple requirements
}
As described above, you can use command completion to see the valid values. An example is "multiplicity".
As of the 0.24.0 line, the OSEE Types and Access Control Artifacts are "versioned". This allows the "production" version of the code to use one version of the types while another release line is being developed or prepared for release.
Prior to 0.24.0, types were loaded based on the artifact type. Since then, types are loaded by using OrcsTypesData.OSEE_TYPE_VERSION to index into the tuple table and retrieve the gamma_ids of the type attributes to load. This code variable therefore needs to match the tuple entries for the new version of the types sheet(s).
To create a new types version
Downloading and Configuring Eclipse
git reset --hard origin/dev

**Only do this when you have NO uncommitted changes.**
JUnit Method Rules:
eclipsec -application org.eclipse.osee.framework.database.init.configClient -vmargs -Xmx512m -Dosee.log.default=INFO -Dosee.application.server=http://localhost:8089 -Dosee.authentication.protocol=trustAll -Dosee.prompt.on.db.init=false -Dosee.choice.on.db.init="Base - for importing branches"
tag_all
SELECT * FROM osee_tx_details WHERE branch_id = ? AND tx_type = 0;
SELECT UNIQUE(gamma_id) FROM osee_tx_details txd, osee_txs txs1 WHERE txd.branch_id = ? AND tx_type = 0 AND txd.branch_id = txs1.branch_id AND txd.transaction_id = txs1.transaction_id AND NOT EXISTS (SELECT 1 FROM osee_txs txs2 WHERE txs1.gamma_id = txs2.gamma_id AND txs2.branch_id <> 6277884563228332544) ORDER BY gamma_id;
SELECT URI FROM osee_tx_details txd, osee_txs txs1, osee_attribute att WHERE txd.branch_id = ? AND tx_type = 0 AND txd.branch_id = txs1.branch_id AND txd.transaction_id = txs1.transaction_id AND NOT EXISTS (SELECT 1 FROM osee_txs txs2 WHERE txs1.gamma_id = txs2.gamma_id AND txs2.branch_id <> 6277884563228332544) AND txs1.gamma_id = att.gamma_id AND URI IS NOT NULL;
DELETE FROM osee_relation_link WHERE gamma_id IN (SELECT gamma_id FROM osee_tx_details txd, osee_txs txs1 WHERE txd.branch_id = ? AND tx_type = 0 AND txd.branch_id = txs1.branch_id AND txd.transaction_id = txs1.transaction_id AND NOT EXISTS (SELECT 1 FROM osee_txs txs2 WHERE txs1.gamma_id = txs2.gamma_id AND txs2.branch_id <> 6277884563228332544));

DELETE FROM osee_artifact WHERE gamma_id IN (SELECT gamma_id FROM osee_tx_details txd, osee_txs txs1 WHERE txd.branch_id = ? AND tx_type = 0 AND txd.branch_id = txs1.branch_id AND txd.transaction_id = txs1.transaction_id AND NOT EXISTS (SELECT 1 FROM osee_txs txs2 WHERE txs1.gamma_id = txs2.gamma_id AND txs2.branch_id <> 6277884563228332544));

DELETE FROM osee_attribute WHERE gamma_id IN (SELECT gamma_id FROM osee_tx_details txd, osee_txs txs1 WHERE txd.branch_id = ? AND tx_type = 0 AND txd.branch_id = txs1.branch_id AND txd.transaction_id = txs1.transaction_id AND NOT EXISTS (SELECT 1 FROM osee_txs txs2 WHERE txs1.gamma_id = txs2.gamma_id AND txs2.branch_id <> 6277884563228332544));
Configuring ATS for Change Tracking
The extension point org.eclipse.osee.framework.ui.skynet.BlamOperation can be used to contribute a custom OSEE operation that gives the developer a very quick way to define the graphical interface that supplies the operation with user-specified parameters. org.eclipse.osee.framework.ui.skynet.blam.operation.ChangeArtifactTypeBlam provides a simple example.
References
**Note:** *Tycho/Maven build support available for source code versions 0.9.9_SR6 and higher.*
Assuming the following layout:
/UserData/org.eclipse.osee
machine@user /UserData/org.eclipse.osee/plugins/org.eclipse.osee.parent:
$ mvn clean verify
Path | Artifact | Description |
---|---|---|
plugins/org.eclipse.osee.client.all.p2/target/ | repository/ | OSEE IDE Client P2 Site |
 | org.eclipse.osee.client.all.p2.zip | OSEE IDE Client P2 Archived Update Site |
plugins/org.eclipse.osee.client.all.product/target/products/ | build_label.txt | OSEE Build Information |
 | org.eclipse.osee.ide.id-linux.gtk.x86.tar.gz | OSEE Client IDE All-In-One Linux x86 |
 | org.eclipse.osee.ide.id-linux.gtk.x86_64.tar.gz | OSEE Client IDE All-In-One Linux x86 64-bit |
 | org.eclipse.osee.ide.id-win32.win32.x86.zip | OSEE Client IDE All-In-One Win32 x86 |
 | org.eclipse.osee.ide.id-win32.win32.x86_64.zip | OSEE Client IDE All-In-One Win32 x86 64-bit |
plugins/org.eclipse.osee.x.server.p2/target | repository/ | OSEE Application Server P2 Site |
 | server/ | OSEE Application Server |
 | org.eclipse.osee.x.server.p2.zip | OSEE Application Server Archived Update Site |
 | org.eclipse.osee.x.server.runtime.zip | OSEE Application Server Zipped Runtime |
OSEE System Requirements
The following steps walk a developer through defining the classes necessary to begin importing coverage data into the OSEE application. Please keep in mind that these are meant to be simplified examples and developers are encouraged to 'get creative' when adapting these examples to their own particular context.
1. Write a class that extends AbstractCoverageBlam
public class MyCoverageImportBlam extends AbstractCoverageBlam {
   public static String COVERAGE_IMPORT_DIR = "Coverage Import Directory";
   public static String NAMESPACE = "Code Namespace";

   @Override
   public String getName() {
      return "My Coverage Import";
   }

   @Override
   public Collection<String> getCategories() {
      return Arrays.asList("Blams");
   }

   @Override
   public String getDescriptionUsage() {
      return "Import coverage from coverage directory.";
   }

   @Override
   public void runOperation(final VariableMap variableMap, IProgressMonitor monitor) throws Exception {
      try {
         final String coverageInputDir = variableMap.getString(COVERAGE_IMPORT_DIR);
         if (!Strings.isValid(coverageInputDir)) {
            throw new OseeArgumentException("Must enter valid filename.");
         }
         final String namespace = variableMap.getString(NAMESPACE);
         if (!Strings.isValid(namespace)) {
            throw new OseeArgumentException("Must enter valid namespace.");
         }
         File file = new File(coverageInputDir);
         if (!file.exists()) {
            throw new OseeArgumentException("Invalid filename.");
         }
         MyCoverageImporter myCoverageImporter = new MyCoverageImporter(coverageInputDir, namespace);
         CoverageImport coverageImport = myCoverageImporter.run(monitor);
         setCoverageImport(coverageImport);
      } catch (Exception ex) {
         OseeLog.log(Activator.class, OseeLevel.SEVERE_POPUP, ex);
      }
   }

   @Override
   public String getXWidgetsXml() {
      StringBuffer buffer = new StringBuffer("<xWidgets>");
      buffer.append("<XWidget xwidgetType=\"XDirectorySelectionDialog\" " + getDefaultDirectory() + " displayName=\"" + COVERAGE_IMPORT_DIR + "\" />");
      buffer.append("<XWidget xwidgetType=\"XText\" displayName=\"" + NAMESPACE + "\" />");
      buffer.append("</xWidgets>");
      return buffer.toString();
   }

   private String getDefaultDirectory() {
      if (CoverageUtil.isAdmin()) {
         return " defaultValue=\"C:\\UserData\" ";
      }
      return "";
   }
}
2. Define a class that implements ICoverageImporter
public class MyCoverageImporter implements ICoverageImporter {
   private final String coverageInputDir;
   private final String namespace;
   private final CoverageImport coverageImport = new CoverageImport("My Coverage Import");

   public MyCoverageImporter(String coverageInputDir, String namespace) {
      this.coverageInputDir = coverageInputDir;
      this.namespace = namespace;
   }

   @Override
   public String getName() {
      return "My Coverage Importer";
   }

   @Override
   public CoverageImport run(IProgressMonitor progressMonitor) throws OseeCoreException {
      /*
       * Use any member variables to populate coverageImport
       */
      return coverageImport;
   }
}
3. Add extension point declaration to package's plugin.xml
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension point="org.eclipse.osee.framework.ui.skynet.BlamOperation">
      <Operation className="com.my.coverage.MyCoverageImportBlam">
      </Operation>
   </extension>
</plugin>
Swagger is an in-depth, web/UI-based API documentation and interaction tool that is now incorporated into the OSEE framework. Below are instructions for using and updating the tool.
The Swagger web application is located here: http://{host}/swagger/index.html, where {host} is the top-level domain of your organization, or http://localhost:{port} when running a local server on a development machine.
The default "definition" API endpoints will be shown and are based on the Select a definition dropdown field selection in the upper-right hand corner of the page. APIs are grouped by definition, which are pre-generated files we produce and continually update. The Swagger application utilizes these custom definition files, currently in JSON format, to display a list of API endpoints and their features defined in its respective definition file.
The Servers dropdown field contains only one selection for now, but can be used to specify alternate, specific servers in the future.
Just below the Servers field, you'll find the Filter by tag text field. Swagger tags are custom searchable keywords we provide for groups of endpoints. They allow quick filtering of endpoints the user may be interested in, as the complete list of endpoints per definition may be very lengthy.
The Swagger definition files are generated by package-specific Java applications located under each targeted package. They can be run or debugged individually, or all in tandem by running or debugging the parent SwaggerGenerator application located under the org.eclipse.osee.ats.ide.integration.test.util package.
An existing Swagger-annotated class has the @Swagger annotation placed at the class level. When adding a new endpoint to an existing and properly set up Swagger-annotated class, the new endpoint will be picked up automatically as long as the endpoint is annotated with a @Path annotation, with these points in mind:
A "Swagger-aware package" contains its own custom Swagger Generator class/application, which scans for all Swagger-annotated classes under that particular package. A Swagger-annotated class contains the @Swagger annotation at the class level. When adding a new Swagger-annotated class to a Swagger-aware package, add the @Swagger annotation at the class level for this new class. For any containing endpoints to be picked up by the Swagger web application, they must have a @Path annotation, at minimum.
In order for all Swagger-annotated classes under a particular package to be scanned, and subsequently added to the Swagger web application, a Generator Class needs to be implemented within that package. See the following code example below. This particular class has generic tagging implemented, which provides out-of-the-box searchable tagging support for Swagger-annotated classes with no Swagger endpoint documentation tags implemented yet.
public class DefineApiSwaggerGenerator {
   private static final String definitionPath = "../org.eclipse.osee.web.ui/src/swagger/definitions/";
   // Only one period in the definition file name is supported
   private static final String definitionFile = "org_eclipse_osee_define_api.json";
   private static final String infoTitle = "Define API Endpoint Definitions";
   private static final String infoDescription = "Allows interactive support for Define API endpoints.";
   private static final String serverUrl = "/define";
   private static final String serverDescription = "Define";

   public static void main(String[] args) {
      Set<Class<?>> allClasses = Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api");
      allClasses.addAll(Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api.publishing"));
      allClasses.addAll(Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api.publishing.datarights"));
      allClasses.addAll(Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api.publishing.templatemanager"));
      allClasses.addAll(Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api.synchronization"));
      allClasses.addAll(Lib.getAllClassesUnderPackage("org.eclipse.osee.define.api.toggles"));

      Set<Class<?>> swaggerClasses = new HashSet<Class<?>>();
      for (Class<?> clazz : allClasses) {
         if (clazz.isAnnotationPresent(Swagger.class)) {
            swaggerClasses.add(clazz);
         }
      }

      System.out.println("Creating Swagger " + definitionFile + " definitions file. Please wait...");

      // Read in all applicable classes, creating initial Swagger openAPI definition object
      OpenAPI openAPI = new Reader(new OpenAPI()).read(swaggerClasses);
      Info info = new Info();
      info.setTitle(infoTitle);
      info.setDescription(infoDescription);
      openAPI.setInfo(info);
      Server server = new Server();
      server.setUrl(serverUrl);
      server.setDescription(serverDescription);
      openAPI.addServersItem(server);

      // Add searchable tagging support to groups of endpoints
      Map<String, PathItem> taggedPaths = openAPI.getPaths().entrySet().stream().map(
         entry -> new AbstractMap.SimpleEntry<>(entry.getKey(), addTagsToPathItem(entry.getKey(), entry.getValue()))).collect(
            Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
      Paths paths = new Paths();
      paths.putAll(taggedPaths);
      openAPI.setPaths(paths);

      try (FileWriter fr = new FileWriter(definitionPath + definitionFile)) {
         fr.write(Json.mapper().writeValueAsString(openAPI));
      } catch (JsonProcessingException e) {
         e.printStackTrace();
      } catch (IOException e) {
         e.printStackTrace();
      }

      System.out.println("Swagger " + definitionFile + " definitions file created.");
      System.out.println("");
   }

   private static PathItem addTagsToPathItem(String path, PathItem pathItem) {
      String[] pathElements = path.split("/");
      if (pathItem.getGet() != null) {
         pathItem.getGet().addTagsItem(pathElements[1]);
      }
      if (pathItem.getDelete() != null) {
         pathItem.getDelete().addTagsItem(pathElements[1]);
      }
      if (pathItem.getHead() != null) {
         pathItem.getHead().addTagsItem(pathElements[1]);
      }
      if (pathItem.getPatch() != null) {
         pathItem.getPatch().addTagsItem(pathElements[1]);
      }
      if (pathItem.getPost() != null) {
         pathItem.getPost().addTagsItem(pathElements[1]);
      }
      if (pathItem.getPut() != null) {
         pathItem.getPut().addTagsItem(pathElements[1]);
      }
      if (pathItem.getTrace() != null) {
         pathItem.getTrace().addTagsItem(pathElements[1]);
      }
      if (pathItem.getOptions() != null) {
         pathItem.getOptions().addTagsItem(pathElements[1]);
      }
      return pathItem;
   }
}
Here is an example of a generator class with no generic tagging implemented. All of the endpoint classes under this particular package, which this generator class handles, instead have Swagger documentation tagging implemented for each endpoint. This customized documentation tagging takes better advantage of Swagger's capabilities, but requires more work to document each endpoint. For classes under packages that have not been documented yet, generic tagging may be used in the meantime.
public class DispoSwaggerGenerator {
   private static final String definitionPath = "../org.eclipse.osee.web.ui/src/swagger/definitions/";
   // Only one period in the definition file name is supported
   private static final String definitionFile = "org_eclipse_osee_disposition_rest.json";
   private static final String infoTitle = "Dispo API Endpoint Definitions";
   private static final String infoDescription = "Allows interactive support for Dispo API endpoints.";
   private static final String serverUrl = "/dispo";
   private static final String serverDescription = "Dispo";

   public static void main(String[] args) {
      Set<Class<?>> allClasses = Lib.getAllClassesUnderPackage("org.eclipse.osee.disposition.rest.resources");
      Set<Class<?>> swaggerClasses = new HashSet<Class<?>>();
      for (Class<?> clazz : allClasses) {
         if (clazz.isAnnotationPresent(Swagger.class)) {
            swaggerClasses.add(clazz);
         }
      }

      System.out.println("Creating Swagger " + definitionFile + " definitions file. Please wait...");

      // Read in all applicable classes, creating initial Swagger openAPI definition object
      OpenAPI openAPI = new Reader(new OpenAPI()).read(swaggerClasses);
      Info info = new Info();
      info.setTitle(infoTitle);
      info.setDescription(infoDescription);
      openAPI.setInfo(info);
      Server server = new Server();
      server.setUrl(serverUrl);
      server.setDescription(serverDescription);
      openAPI.addServersItem(server);

      System.out.println("Swagger " + definitionFile + " definitions file created.");
      System.out.println("");
   }
}
1. Add the new class to the parent SwaggerGenerator.java class:
public class SwaggerGenerator {
   public static void main(String[] args) {
      DefineApiSwaggerGenerator.main(args);
      MimSwaggerGenerator.main(args);
      OrcsSwaggerGenerator.main(args);
      AtsApiSwaggerGenerator.main(args);
      DispoSwaggerGenerator.main(args);
      // Add new Swagger Generator class here
   }
}
2. Add the new definition file reference URL to the swagger-initializer.js file under org.eclipse.osee.web.ui/src/swagger/node_modules/swagger-ui-dist/. Make sure it is placed in alphabetical order based on its "name" attribute in the "urls" array:
window.onload = function() {
  //<editor-fold desc="Changeable Configuration Block">

  // the following lines will be replaced by docker/configurator, when it runs in a docker-container
  window.ui = SwaggerUIBundle({
    // Alphabetical:
    urls: [
      { url: "/swagger/definitions/org_eclipse_osee_ats_api.json", name: "ATS API Endpoints" },
      { url: "/swagger/definitions/org_eclipse_osee_define_api.json", name: "Define API Endpoints" },
      { url: "/swagger/definitions/org_eclipse_osee_disposition_rest.json", name: "Dispo API Endpoints" },
      { url: "/swagger/definitions/org_eclipse_osee_mim.json", name: "MIM API Endpoints" },
      { url: "/swagger/definitions/org_eclipse_osee_orcs_rest.json", name: "Orcs Endpoints" }
    ],
    dom_id: '#swagger-ui',
    filter: true,
    configUrl: '/swagger/swagger-config.json',
    deepLinking: true,
    presets: [
      SwaggerUIBundle.presets.apis,
      SwaggerUIStandalonePreset
    ],
    plugins: [
      SwaggerUIBundle.plugins.DownloadUrl
    ],
    layout: "StandaloneLayout",
    supportedSubmitMethods: ["get", "head"]
  });

  //</editor-fold>
};
3. For a class to be picked up by the Swagger web application for this new definition, the @Swagger annotation must be added at the class level, and, at minimum, each relevant endpoint must be annotated with @Path.
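The class-scan step above can be sketched in plain Java. This is a minimal runnable illustration of how a generator filters classes by a marker annotation; the @Swagger and @Path annotations below are hypothetical stand-ins defined locally (the real @Swagger marker is provided by OSEE, and @Path by JAX-RS), so the sketch compiles without those jars.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stand-in for OSEE's @Swagger marker annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Swagger {
}

// Hypothetical stand-in for the JAX-RS @Path annotation.
@Retention(RetentionPolicy.RUNTIME)
@interface Path {
   String value();
}

// A resource class marked for pickup by the Swagger generator.
@Swagger
@Path("demo")
class DemoResource {
}

// A class without the marker, which the generator skips.
class PlainResource {
}

public class SwaggerScanDemo {
   // Mirrors the filter loop in DispoSwaggerGenerator.main().
   static boolean isSwaggerAware(Class<?> clazz) {
      return clazz.isAnnotationPresent(Swagger.class);
   }

   public static void main(String[] args) {
      System.out.println(isSwaggerAware(DemoResource.class));  // true
      System.out.println(isSwaggerAware(PlainResource.class)); // false
   }
}
```

This is the same mechanism the generator relies on: because the marker annotation has runtime retention, `Class.isAnnotationPresent` can see it when scanning the package.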
As stated above, generic tagging may be used in the meantime, until all endpoints under a particular Swagger-aware package are fully documented with Swagger documentation tagging. For a full description of supported Swagger annotation tagging, refer to the Swagger 2.X Annotations documentation here:
Below is an example of Swagger documentation annotation tagging on an endpoint. In this case, the Swagger-specific documentation annotations are @Operation, @Tags, @ApiResponses, and @Parameter:
@Path("{name}")
@POST
@RolesAllowed(DispoRoles.ROLES_ADMINISTRATOR)
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Produces(MediaType.APPLICATION_JSON)
@Operation(summary = "Create a new Disposition Set given a name, dispoType, and path")
@Tags(value = {@Tag(name = "create"), @Tag(name = "set")})
@ApiResponses(value = {
   @ApiResponse(responseCode = "201", description = "OK. Created the Disposition Set"),
   @ApiResponse(responseCode = "409", description = "Conflict. Tried to create a Disposition Set with same name"),
   @ApiResponse(responseCode = "400", description = "Bad Request. Did not provide both a Name and a valid Import Path")})
public Response postDispoSetByName(
   @Parameter(description = "String used to specify the directory to populate the set", required = true) @FormParam("path") String importPath,
   @Parameter(description = "String used to name the Set", required = true) @PathParam("name") String name,
   @Parameter(description = "String used to specify if using disposition vs coverage", required = true) @QueryParam("dispoType") String dispoType,
   @QueryParam("userName") String userName) {
   DispoSetDescriptorData descriptor = new DispoSetDescriptorData();
   descriptor.setName(name);
   descriptor.setImportPath(importPath);
   descriptor.setDispoType(dispoType);
   return postDispoSet(descriptor, userName);
}
Below is a link to a commit which removes generic tagging and adds custom Swagger documentation tagging for all endpoints in an existing Swagger-aware package:
For endpoint classes whose class-level @Path is reached through another class that also carries a @Path annotation, only the parent class should be annotated with @Swagger. Swagger cannot decipher the parent/child relationship between the two @Path annotations, so calls made directly to the child class will fail. For example, the BranchesResource class calls a number of other endpoint classes, including TupleEndpoint and ApplicabilityEndpoint; those classes do not carry the @Swagger annotation, as they are already picked up through the parent BranchesResource class.
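The parent/child arrangement described above follows the JAX-RS sub-resource locator pattern. The sketch below is a runnable, simplified illustration; the @Path and @Swagger annotations and both resource classes are hypothetical stand-ins (the real BranchesResource and TupleEndpoint live in OSEE), defined locally so the example compiles without the OSEE or JAX-RS jars.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-ins for the JAX-RS @Path annotation and OSEE's
// @Swagger marker, so this sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@interface Path {
   String value();
}

@Retention(RetentionPolicy.RUNTIME)
@interface Swagger {
}

// Child endpoint: has its own @Path, but deliberately NO @Swagger marker.
@Path("tuples")
class TupleEndpointSketch {
}

// Parent endpoint: carries @Swagger, so the generator documents the child's
// operations by following the sub-resource locator below.
@Swagger
@Path("branches")
class BranchesResourceSketch {
   // Sub-resource locator: requests under branches/{id}/tuples are routed
   // through the parent to the child endpoint class.
   @Path("{id}/tuples")
   public TupleEndpointSketch getTuples() {
      return new TupleEndpointSketch();
   }
}

public class SubResourceDemo {
   public static void main(String[] args) {
      // Only the parent is marked for Swagger pickup.
      System.out.println(BranchesResourceSketch.class.isAnnotationPresent(Swagger.class)); // true
      System.out.println(TupleEndpointSketch.class.isAnnotationPresent(Swagger.class));    // false
   }
}
```

Marking only the parent keeps the generated definition routed through paths that actually resolve at runtime.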
In cases where a group of endpoints share the same annotations, in whole or in part, implementing a new annotation @interface containing the common annotations reduces redundancy and makes for cleaner code. Below is an example of a common Swagger annotation @interface containing such shared annotations. In this case, when the @SwaggerCommonOrcsAnnotations annotation is added to a group of endpoints, they all pick up the Swagger @ApiResponse annotations shown below:
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@ApiResponses(value = {
   @ApiResponse(responseCode = "200", description = "Successful"),
   @ApiResponse(responseCode = "400", description = "Content not found")})
public @interface SwaggerCommonOrcsAnnotations {
   //
}
OSEE provides a simple mechanism to contribute static web resources in the MANIFEST.MF. The header "Osee-JaxRs-Resource" allows specifying a path in the bundle and its mapping to a URL. See org.eclipse.osee.ats.rest/META-INF/MANIFEST.MF for an example. The resource(s) at that path will be served by the embedded web server in the OSEE server at the specified URL.
Yes. Everything except:
config.ini
org.eclipse.equinox.simpleconfigurator
The simple answer is BOTH. The OSEE Application Framework was created to allow applications to be built on top of it and share a common data model; it can be used independently of any OSEE applications. In addition, there are applications that are delivered with and use the OSEE Application Framework. These include a full-featured Requirements and Document Management System (OSEE Define), a powerful change tracking and configuration management application (OSEE ATS, the Action Tracking System), a fully customizable peer-review module, and other project, reporting, and metrics tools. These applications can be used out-of-the-box, and new applications can be created on or integrated with the framework to share and contribute to the same data.
No. Although OSEE was created to handle the complexity of a large US Department of Defense program, it was architected to support any systems engineering project, from a simple application built for a single customer to a large, complex application. In addition, since OSEE is an independent application, the OSEE development team uses OSEE itself to develop, deploy, and maintain OSEE.
Skynet is a legacy term for the persistence portion of the OSEE Application Framework.
OSEE provides Artifacts, Attributes, and Relations that are strongly typed. This means users can create their own artifact types (for example, a "Software Requirement" type to represent requirements at the software level), their own attribute types (for example, a "Qualification" or "Safety Criticality" attribute), and even their own relation types (for example, a "Software Requirement to Allocation" relation). These types are defined in the Artifact Framework and can be created dynamically, either during database creation or while the system is running. This allows end users to expand the data stored in OSEE without requiring a new release.
The Action Tracking System is the configuration management system built into OSEE and tightly integrated with the OSEE Application Framework. It uses a powerful workflow engine to provide fully customizable workflows that track improvements, problems, and support for any number of teams, tools, and programs simultaneously. This gives users a single-point view into all the work they are required to do.
Although there are a number of open source and commercial bug tracking systems available, OSEE's goal of integrating workflow management tightly with the Application Framework, and with the applications built on top of it, required us to develop ATS. ATS is meant to be more than simple bug tracking, since it can be used to manage multiple teams working on multiple products or support tasks simultaneously. This means that you can create a single "Action" to "Fix the XYZ capability" that will create the necessary workflows for all the teams that need to perform work. For example, a workflow may be created not only for the software development team, but also for the test team, documentation team, integration team, and even facilities like labs or conference rooms. Each team then moves independently through its workflow to perform the work necessary for the common "Action". In addition, ATS enables complete customization of a different workflow for each configured team. This means the documentation team can follow their own process, which may contain 5 different states, while the application developers follow their own more complex process, which may contain 30 different states.
Traceability is handled in OSEE through the use of Relations. These relations can be defined in OSEE as needed, and users can then add and remove them throughout the lifecycle of the requirements or other artifacts. Deliverable documents and report generation also make use of this traceability.
OSEE Define is OSEE's advanced Requirements and Document Management System. OSEE Define can be used to track a simple application's requirements, code, and tests, or configured to support a large program doing concurrent development with multiple parallel builds, managing requirements for multiple product lines simultaneously. Although any application file (document) can be stored and managed, OSEE Define is tightly integrated with Microsoft Word(c) to store and manage individual requirement objects (stored in XML) and provide advanced features like index-based searching and differencing of historical changes. Tightly integrated with the Action Tracking System, OSEE Define can be configured to provide advanced configuration management for any set of requirement objects.