Typal

Developer-friendly IDL and model transformer for Web Engineering

* Closure is a registered trademark of Google Inc.

Introduction

This webpage is about Typal, an interface definition language that aims to help web engineers separate their modelling activity from coding by organising designs outside the source code. The idea is similar to UML class diagrams: to define interfaces and map relationships between them. However, not many people really use UML in practice: they find it slows them down without providing much benefit, as its brittle code-generation capability does not satisfy their needs.

On the other hand, Typal converts XML models into a number of JavaScript-relevant targets: typedefs for the VSCode IDE, externs for the Closure Compiler, protobufs for binary data serialisation and stubs for RPC. The professional version of Typal includes the ability to write custom transforms on the Type Object Model (TOM) tree. Our first milestone is to equip JS programmers with the right tools and then to expand to other programming languages, but Typal could already be of interest to full-stack engineers too.

There are many conscientious professionals who love to work with good old JavaScript, but the industry-wide failure to recognise the need for expert engineering tools, rather than panaceas, is holding them back. Our effort is thus twofold: first, to restore the status of the language by proving that it's absolutely possible to achieve the same level of developer experience with plain JS (given the right tools); and second, to unlock the possibility of building holistic web apps by applying fundamental principles of Model-Driven Web Engineering.

Different levels of the producer-consumer hierarchy are not permitted to use specialized tools for their specialized tasks, skills, and interests, but must fit themselves to the latest panacea.

Brad Cox, co-author of Objective-C, in Planning the software industrial revolution

Abstract

The structure of the page is as follows: first, we describe how Typal can be used as the single source of truth for generating typedefs for the IDE and externs for the compiler; then we introduce a new concept of runtypes that work similarly to externs but allow properties to be mangled while performing type-checking; then we demonstrate some power-features, such as embedding examples into the designs and argument-records, i.e., arcs; then we showcase how tools like Depack are used for bundling and Front-End Middleware for active development; and finally we provide an insight into the type-engineer runtime library for trait inheritance.

Interfaces, IDLs and Models

To start off, we need to explain the purpose of interfaces and interface definition languages. In Software Engineering, the term "software design" is commonplace, as everybody recognises that besides programming, there is the job of organising our ideas and observations through modelling: by studying the real-world phenomena that we try to emulate with software (such as any enterprise activity), we can transfer our knowledge of their structure and interrelations into models via the design process. The mid-level specification (below requirements but above code) of what objects can do and what parameters they accept for each operation is known as interfaces.

In other words, an interface is just a class without any real code attached to it, but with strong-typing information about its methods and fields. This means there can be numerous implementations of the same interface (even from various vendors), which are substitutable by the integrator since they follow the same specification. With this clear separation of design from code, it is much easier to reason about our software systems and their context at large, without delving into implementation details. Utilising interfaces is almost law in the software industry, as everyone knows the following rule:
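
To make the substitutability point concrete, here is a minimal sketch in plain JS with JSDoc; the ILogger contract and both implementations are hypothetical names of our own, not part of Typal:

```javascript
/**
 * A minimal "program to an interface" sketch: run() commits only to the
 * ILogger contract (a hypothetical interface), never to a concrete class.
 * @typedef {{ log: function(string): void }} ILogger
 */

/** @param {ILogger} logger */
function run(logger) {
  logger.log('started')
}

// Two interchangeable "vendor" implementations of the same interface.
const consoleLogger = { log: (m) => console.log('console:', m) }
const bufferLogger = { buf: [], log(m) { this.buf.push(m) } }

run(consoleLogger) // prints "console: started"
run(bufferLogger)  // records the message instead
console.log(bufferLogger.buf[0]) // 'started'
```

Since run() only relies on the shape declared by the typedef, either implementation can be swapped in without touching the consumer.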

Program to an interface, not an implementation. Don't declare variables to be instances of particular concrete classes. Instead, commit only to an interface defined by an abstract class.

Erich Gamma, in Design Patterns: Elements of Reusable Object-Oriented Software

Circular Dependencies

Although we have claimed that the above rule is almost law in industry, some people might remain sceptical, as they don't see a clear problem with programming to classes: instead, one might choose to import a class and use it for references in annotations. So is there any real problem that presents itself if we break the aforementioned rule? Yes, there is a severe problem which we have no way around without employing interfaces.

It is called circular dependencies (or cyclic dependencies, CD). A cycle takes place when one module imports another, while the latter imports the original one "back". This creates a cycle in the static analysis graph and might prevent the correct working of the program. We say might, because the ECMAScript module system will not throw an error when CDs are detected (unlike F#, for example), but only when the imported entity is referenced, so referencing it exclusively in JSDoc comments could actually work. However, there are some cases when things really break down.
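
The failure mode can be reproduced in a single file, since class bindings live in a temporal dead zone just like circularly-imported module bindings. Below is a sketch using the Callable/Callback names from the example that follows:

```javascript
// Sketch of the breakdown: referencing a binding before it is initialised
// throws the same error a harmful module cycle produces at runtime.
let message
try {
  // Callable is hoisted but not yet initialised at this point,
  // exactly like a circularly-imported binding that is used "too early".
  class Callback extends Callable {}
} catch (e) {
  message = e.message
}
class Callable {}
console.log(message) // Cannot access 'Callable' before initialization
```

The same error surfaces across two modules when the extends clause of one evaluates the other's still-uninitialised binding.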

This example is taken straight from Typal's own design. Firstly, we define a trait (more on traits later) named Callable that has an args field with the list of arguments instantiated by instances of Callable themselves during the construction phase.

Secondly, we anticipate two kinds of args: a standard arg, and a callback which is a special version of the standard arg that can be invoked. The problem is that the Callback has to extend Callable to become executable, but importing it will not work as Callable has already imported the Callback's constructor.

This forces us to place the Callable and Callback classes into the same file, resulting in tight-coupling which is not ideal.

One of the advantages of OOP is the addition of a runtime binding layer, so that we can move constructor references from hard-coded imports to fields: the Callable class now has the ArgC and CallbackC fields to store constructor references to be accessed at runtime, instead of importing them statically.

Accessing constructors via fields removes the edge between Callable and Callback: the latter can now extend the former without forming a cycle. But we still need to fill those fields with constructor pointers. To do that, we traverse one layer up and add them to the Function class as getters.
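
A sketch of this dependency-injection-via-fields design follows. The class bodies and the makeCallable factory are our own assumptions for illustration; the article's actual design places the getters on the Function class:

```javascript
// Callable no longer imports Arg or Callback: it holds constructor
// references in fields, to be injected at runtime.
class Callable {
  constructor() {
    this.ArgC = null      // will point at the Arg constructor
    this.CallbackC = null // will point at the Callback constructor
    this.args = []
  }
  addArg(name) {
    this.args.push(new this.ArgC(name)) // runtime binding, no static import
  }
}

class Arg {
  constructor(name) { this.name = name }
}

// Callback can extend Callable freely: the edge back to Callback is gone.
class Callback extends Callable {}

// One layer up, a factory fills the fields with constructor pointers
// (the article uses getters on Function for this).
function makeCallable() {
  const callable = new Callable()
  callable.ArgC = Arg
  callable.CallbackC = Callback
  return callable
}

const c = makeCallable()
c.addArg('data')
console.log(c.args[0].name) // 'data'
```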

The new, loosely-coupled design allows accessing constructors dynamically. However, to @annotate the ArgC and CallbackC fields with types, we would still need to import the constructors statically, breeding ... a cycle 🤷‍♀️

To be fully honest with you, the new cycle would not be as bad as the first one: the Callback class imported into the Callable module is used only in comments and never in computations, so the runtime won't throw the error Cannot access 'Callable' before initialization. But this holds only under one condition: if and only if Callback is imported before Callable in Function. This is because the Callback class actually uses Callable to fill its extends binding hook, whereas the latter merely references the former in a JSDoc comment, so the order matters.

Although we could get things working again, solving the problem by rearranging module imports is not such a great idea: the decision is opaque (unless a comment is provided) and might confuse other team members. Additionally, IDEs have an "organise imports" feature that is invoked often and sorts imports alphabetically, so developers would have to constantly rearrange the modules by hand.

To sum up, the circular dependency problem is not healed by the introduction of interfaces alone, but requires careful application of an appropriate design pattern in each particular situation (we used dependency injection via fields, but other patterns like Facade might also help). What interfaces can do is remove the need to import constructors for the purpose of annotating fields, thus avoiding the creation of cycles in the first place.


Model-Driven Approach

Most programming languages' compilers include a type-system that validates the correct use of types in source code. But to say that interfaces are used for type-checking is not to give the full picture: such a statement falls back to the code-centric view of software, while we are trying to think in terms of software design. Of course, engineering of software is an iterative process, especially in today's agile environments, but we must remember that interface design is not done merely so that the code can be type-checked, but to define a MODEL, which can then be transposed into implementation(s), after which we formally validate that all such implementations meet the model's standards. To paraphrase, interfaces are then not just type incubators for source code to be checked against, but the model's components. Model-Based Engineering has become the standard in other mature disciplines, such as Systems Engineering and automotive, which have recognised its tremendous benefits, and it ought to become the standard in software as well.

Many people have complained about JavaScript's lack of interfaces and/or syntax for types; however, we believe that this is actually its strong side (pun intended): unlike Java and TypeScript, whose type systems and related syntax are set in stone, JavaScript does not require adherence to any type requirements (as it does not have static types), making it possible to build other technologies upon it (including Dart, Flow, etc). It then becomes the responsibility of tooling to support type-related operations during every stage of the development process. There can be many such pieces of tooling (IDE, compiler), but they all have one thing in common: they all utilise a model which is described using an IDL and mapped into the tool's bindings to perform their function. We give the most obvious examples of tools and their bindings below, but if you can think of your own novel ways to benefit from model transformation, let us know in the comments section at the end of the page:

* Source Code (abstract classes): builds a bridge between implementation and its interface; can add method bodies and field values.
* Generator (type trees): enables general-purpose model transformations to improve productivity and establish transclusion.
* Interop (protobufs): converts data structures between programming languages (e.g., back-end and front-end) via stubs.
* IDE (typedefs): provides the developer with suggestions for autocompletion and fast access to documentation.
* Compiler (headers): performs formal verification by type-checking variables and inheritance chains in the source code.
* Database (schemas): forward-engineers a physical submodel, produces patches, allows easy porting between vendors.

In the figure above, we have illustrated the targets for model-transformations of interfaces. A brief summary of each is given below, while more details on how each binding mechanism works are presented in the relevant section:

Source Code

As projects evolve over time, the program code is modified to reflect changes to the requirements and context. Unfortunately, ad-hoc modifications are often made directly to the source code, and knowledge about them is stored in the developer's head. Transferring design decisions into the model improves maintenance, while abstract classes (ABCs) serve as a portal between the spec and the code and can include some default method bodies and field values.

IDEs

The best friend of a Software Engineer, the IDE prompts them with autocompletion suggestions during the coding process, which drastically increases throughput. Also, if the developer does not receive hints, they immediately recognise something's gone wrong before compilation even takes place. In VSCode, the bindings are JSDoc comments with @typedef tags, as well as classes in namespaces which are made available globally.

Compilers

Usually the programming language will come with a compiler which will perform optimisation passes on the code. To do that, it needs to have typing info stored in bindings called headers. However, JavaScript is a dynamic language and its optimisations are speculative in the virtual machine. Nevertheless, it is still possible to statically analyse source code by linking it to a parallel AST with interface info, like Closure Compiler does via externs.

Databases

Persistence is the cornerstone of distributed computing: as applications scale, it becomes impossible to store all data in-memory, so a database vendor has to be chosen. With a model-centred approach, an agnostic logical model can be converted into technology-specific schemas at any time, allowing experimentation with different databases. It also lets us generate patches to existing physical models and produce libraries for each programming language in no time.

Interops

It's not uncommon to deploy programs in different languages to different environments as part of the same stack, e.g., user interfaces written in JS have to communicate with servers in Java or Go. Without an IDL, a RESTful API service has to be set up, which is tedious and time-consuming. A better option is to perform Remote Procedure Calls (RPCs) using stubs and forget about HTTP piping forever. One binary format for such bindings is known as protobufs.

Generators

The true power of Model-Driven Engineering really lies in the ability to establish software product lines. In the world of advanced Software Engineering, new methods of production are employed, such as invasive software composition, which relies on generated code to parameterise components and the adapters between them. A generator also traverses an AST, but in the design realm we label it the Type Object Model (TOM) tree, a hierarchical tree data structure similar to the DOM.
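
As a sketch of what a generator does, here is a tiny visitor over a TOM-like tree; the node shape below is our assumption for illustration, not Typal's actual TOM API:

```javascript
// A hypothetical Type Object Model tree for the IInterface example,
// traversed depth-first like a DOM tree.
const tom = {
  kind: 'types', ns: 'com.example', children: [{
    kind: 'interface', name: 'IInterface', children: [
      { kind: 'field', name: 'myField', type: 'string', children: [] },
      { kind: 'method', name: 'perform', type: 'string', children: [] },
    ],
  }],
}

function visit(node, fn) {
  fn(node)
  for (const child of node.children) visit(child, fn)
}

// A trivial "transform": collect the names of all typed members.
const names = []
visit(tom, (node) => { if (node.name) names.push(node.name) })
console.log(names.join(', ')) // IInterface, myField, perform
```

A real generator would emit target code (typedefs, externs, stubs) at each node instead of just collecting names.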


Although we have identified many targets into which a model can be transformed, it's clear that most of them will work together in tandem: the abstract classes will rely on typedefs for IDE support and headers for compilation, as will auto-generated DBMS libraries that mirror schemas in a given programming language. The protobuf stubs will help to pack data efficiently and ensure fast data transfer that saves on the user's bandwidth. Applied all together, model bindings cover every aspect of software in terms of developer productivity, piping infrastructure and verified correctness. It is the application of model-based principles that leads to holistic, maintainable production systems and really elevates the profession to the status of an Engineering discipline.


Towards Enterprise Engineering

In sum, at Art Deco™, we saw a gap in the market where the need to separate designs from implementations was not addressed. We thus developed a design compiler that works in a fashion similar to standard compilers, but operates on types. We created a technology-neutral XML-based language as an IDL for feeding into an extensible generator capable of transforming interfaces into bindings for multiple targets.

Furthermore, we collected multiple topics in Computer Science including model-driven development, generative programming, component-based architectures and computer-aided engineering under the umbrella of Type Engineering that seeks to help Software Engineers to realise their full potential as designers and not just coders. Typal is its first tangible product, but we will continue to innovate in the same direction.

One of the primary aims of Software Engineering in industry is to serve enterprises, which are highly-complex dynamic systems with many independent agents. It is extremely naive to believe that it is possible to organise their activity with plain programming; rather, sophisticated modelling is needed. Any enterprise software is a comprehensive model, but to be put in place and maintained to a high standard, it requires its own model.

The interfaces can then be thought of as the enterprise's components to be utilised in other areas like data engineering, systems engineering and business engineering. By lifting our types from code to design, we are able to turn them into valuable assets for reuse within entire departments and to streamline inter- and intra-team communication. Typal is a tool that can help with that, not just a panacea that adds types to a programming language.

Typedefs

This section demonstrates how easy it actually is to write interfaces using the Typal IDL. Here's how to define a very simple interface with a field and a method:

<types ns="com.example">
  <IInterface>
    <string name="myField">
      A field of the interface.
    </string>

    <method name="perform" async>
      <arg name="data" string>
        Some input to the method.
      </arg>
      <return string>The result of the operation.</return>

      Performs certain computations on the instance.
    </method>

    A simple interface for the fast example purpose.
  </IInterface>
</types>

Figure 1: an interface definition using straight-forward hierarchical XML notation.

First, we will want to generate typedefs to power the user experience in the IDE. One of the biggest advantages of using Typal is the ability to reference interfaces without having to import classes as modules in code (which is sometimes impossible due to circular dependencies). We strongly believe that with dynamic languages like JavaScript, having access to type hints is crucial, as the developer is doing much of the type checking themselves: if a hint doesn't appear, it is the first indication of some shortcoming, caught even before the compiler. The generated typedefs code will look like the following:

/** @nocompile */
/** */
var com = {}
com.example = {}

/* @typal-start {typal/pages/index/examples/IInterface.xml}  1a8bd52531a82340d0903aff47409768 */
/**
 * A simple interface for the fast example purpose.
 * @interface com.example.IInterface
 */
com.example.IInterface = class { }
/**
 * A field of the interface.
 */
com.example.IInterface.prototype.myField = /** @type {string} */ (void 0)
/**
 * Performs certain computations on the instance.
 * @param {string} data Some input to the method.
 * @return {!Promise<string>} The result of the operation.
 */
com.example.IInterface.prototype.perform = function(data) {}

/**
 * A concrete class of _IInterface_ instances.
 * @constructor com.example.Interface
 * @implements {com.example.IInterface} A simple interface for the fast example purpose.
 */
com.example.Interface = class extends com.example.IInterface { }
com.example.Interface.prototype.constructor = com.example.Interface

// nss:com.example
/* @typal-end */

Figure 2: the result of typedefs generation. Every interface will have a concrete constructor created for it.

By declaring classes under the namespace and assigning to their prototypes, we implicitly declare them as global types accessible via that namespace. And due to the @nocompile tag added at the top, the file will not participate in compilation or affect the program in any way. To take advantage of these types in source code, all we have to do now is import the typedefs file using the standard import statement, as follows:

import './typedefs'

/**
 * An example use of an interface.
 * @param {com.example.IInterface} iface The interface.
 */
export async function example(iface) {
  console.log(iface.myField)
  await iface.perform('input')
}

Figure 3: by importing the typedefs file, all types are made available to the global context via the namespace.

Thanks to the namespace trick that we discovered after extensive research into the pure JSDoc type system, all interfaces are available globally, so that we receive the auto-completion hints that make us extremely productive: error-checking is done by the developer at code-time (prior to the more rigorous compiler checks).

Figure 4: super-comfortable IDE experience is ensured by auto-completions according to type data.

The screenshot above demonstrates how easy it is to get access to our design infrastructure from within the source code, without importing concrete modules that represent classes, thus avoiding the circular dependency pitfall. We have therefore separated the concerns of programming and OOP design. With Typal, it is truly possible to enforce the essential "program to an interface" principle established in the industry.

Compiler Externs

Unlike TypeScript, which is a terminal consumer technology in itself that imposes Microsoft's agenda upon the developers, the Google Closure Compiler is a neutral utility in the toolbelt of professional JavaScript engineers. The header files used by the compiler are known as externs, and to prevent violation of the DRY-principle, we will be generating them from interfaces using Typal.

/**
 * @fileoverview
 * @externs
 */

/** @const */
var com = {}
com.example = {}

/* @typal-type {typal/pages/index/examples/IInterface.xml} com.example.IInterface  1a8bd52531a82340d0903aff47409768 */
/** @interface */
com.example.IInterface = function() {}
/** @type {string} */
com.example.IInterface.prototype.myField
/**
 * @param {string} data
 * @return {!Promise<string>}
 */
com.example.IInterface.prototype.perform = function(data) {}

// nss:com.example
/* @typal-end */
/* @typal-type {typal/pages/index/examples/IInterface.xml} com.example.Interface  1a8bd52531a82340d0903aff47409768 */
/**
 * @constructor
 * @implements {com.example.IInterface}
 */
com.example.Interface = function() {}

// nss:com.example
/* @typal-end */

Figure 5: the externs for the compiler follow certain conventions (while the natural language comments can be omitted).

The traditional workflow, as used by Google engineers themselves, is to define externs and pass them to the compiler using the --externs flag. However, that does not allow the type information to be used during coding, because if the externs are imported in code in the same way as the typedefs above (i.e., import './typedefs'), the compiler will terminate with an error.

Moreover, the JSDoc-standard used by the compiler is not as developer-friendly as VSCode's implementation, for example:

/**
 * @param {function(string, string)} callback
 * @return {undefined}
 */
URLSearchParams.prototype.forEach = function(callback) {};

Figures 6+7: the compiler does not support named arguments in the callback definitions.

The snippet above uses the function(string, string) Closure notation for a callback, and we don't receive hints regarding argument names which encode crucial meaning. On the other hand, with a more extended VSCode syntax, it's possible to do the following:

/**
 * @param {(value:string, key:string) => void} callback
 * @return {undefined}
 */
URLSearchParams.prototype.forEach = function(callback) {}

Figures 8+9: when using VSCode-specific syntax for JSDoc, hints which convey the full meaning of the callback are given.

In this second snippet, the (value:string, key:string) => void construct is used for the callback, yet the compiler does not understand this notation (on top of the fact that externs cannot be imported, as discussed). Therefore, the only viable option to both provide externs to the compiler and to receive the appropriate developer experience is to keep a single source of truth in the form of XML interface definitions, which are then split into these two independent targets. Hence, Typal is a must-have tool for every Closure user who would like to reconcile the dev-x with the build process.

Compiler Runtypes

At Art Deco Code Ltd, London, we choose to constantly innovate and build science-backed technology of unmatched quality. One such innovation is the concept of runtypes that we are proud to share with other members of the Closure user group: since we're keeping the interfaces separate from the source code, we need a way to pass that type info to the compiler. The immediate solution would be to store all interfaces in externs, but that would prevent the compiler from renaming properties and thus forfeit part of its optimisation potential.

Runtypes are a mechanism similar to externs: they provide the compiler with interface information for type-checking, but unlike externs, they allow the compiler to rename fields and methods. The purpose of externs has always been to ensure compatibility between components; however, in a system compiled in one go, preserving all property names is not required and might even be undesirable, as it reveals the intention of the programmer to an outside party studying your organisation's code.

/* @typal-type {typal/pages/index/examples/IInterface.xml} com.example.IInterface  1a8bd52531a82340d0903aff47409768 */
/** @interface */
$com.example.IInterface = function() {}
/** @type {string} */
$com.example.IInterface.prototype.myField
/**
 * @param {string} data
 * @return {!Promise<string>}
 */
$com.example.IInterface.prototype.perform = function(data) {}
/**
 * @suppress {checkTypes}
 * @interface
 * @extends {$com.example.IInterface}
 */
com.example.IInterface

// nss:com.example,$com.example
/* @typal-end */
/* @typal-type {typal/pages/index/examples/IInterface.xml} com.example.Interface  1a8bd52531a82340d0903aff47409768 */
/**
 * @constructor
 * @implements {com.example.IInterface}
 */
$com.example.Interface = __$te_Mixin()
/**
 * @suppress {checkTypes}
 * @constructor
 * @extends {$com.example.Interface}
 */
com.example.Interface

// nss:com.example,$com.example
/* @typal-end */

Figure 10: the runtypes use a shadow $-prefixed namespace to define interfaces and constructors.

The idea is to define all interfaces, constructors and records under a shadow namespace and extend them without a = function() {} RHS, so that they don't spill into code. The __$te_Mixin() call is used to make sure the compiler does not complain about methods not implemented by the constructor. The @suppress tag is used to skip the warning about the missing RHS. Runtype files are then passed using the --js flag, allowing the compiler to activate type inference without using externs as headers, which results in greater space-savings due to mangled properties.

To validate the type-checking property of runtypes, let's write an example function that takes an interface and uses its field incorrectly by passing it to a function called test that expects an argument of a different type. We'll also call a non-existent method perform1 to see if the compiler reacts to that "mistake". We will be using the test-function approach throughout the rest of the examples on this page.

import '../types/typedefs'

/**
 * @param {!com.example.IInterface} iface
 */
export async function validate(iface) {
  test(iface.myField)
  const res1 = await iface.perform()
  test(res1)
  iface.perform1()
}

/**
 * @param {number} n
 */
function test(n) {
  console.log(n)
}

Figure 11: the source to validate correctness of runtypes: a .myField string is passed to a function which expects a number instead.

When running the compiler, it will issue the following warnings, confirming that the correct type verification was done by the type system:

pages/index/chunks/src/index.js:7:7: WARNING - [JSC_TYPE_MISMATCH] actual parameter 1 of test$$module$pages$index$chunks$src$index does not match formal parameter
found   : string
required: number
  7|   test(i1.myField)
            ^^^^^^^^^^

pages/index/chunks/src/index.js:9:7: WARNING - [JSC_TYPE_MISMATCH] actual parameter 1 of test$$module$pages$index$chunks$src$index does not match formal parameter
found   : string
required: number
   9|   test(res1)
             ^^^^

pages/index/chunks/src/index.js:10:5: WARNING - [JSC_INEXISTENT_PROPERTY] Property perform1 never defined on com.example.IInterface
  10|   i1.perform1()
           ^^^^^^^^

0 error(s), 3 warning(s), 97.0% typed

Figure 12: the compiler successfully issued warnings about referencing unknown method and incorrect arg type.

Finally, the table below compares the results of compiling JS source code with externs and runtypes:

(async function(a) {
 console.log(a.myField);
 const b = await a.perform();
 console.log(b);
 a.g();
})({});
(async function(a) {
 console.log(a.g);
 const b = await a.h();
 console.log(b);
 a.i();
})({});

Figure 13: the code compiled with externs (left) and runtypes (right).

As you can see, after compiling JavaScript with runtypes, the .myField property and .perform method were renamed in contrast to externs where they remained the same. Of course, if one were using these API touchpoints from outside the compiled code, she would have to specify them in externs, however if a standalone system is prepared, we can pass type info in runtypes rather than externs, and because all properties are renamed consistently, the program will still work.

In sum, many people use externs as the place where interface definitions are stored, i.e., as headers for the compiler, even in cases where properties can be mangled. Instead, we suggest using runtypes, which will hide implementation details and result in better space optimisation. Typal once again proves to be of ultimate advantage here, as runtypes become the third target of IDL transformation.

Working with Classes

You might remember that in addition to interfaces, we generated @constructors that referenced those interfaces in the @implements tag. This is done on purpose so that we can bridge the gap between the source code and design specification: the constructors work as abstract classes that we can extend (via JSDoc) in code and let the compiler establish a relationship between concrete and such abstract classes.

Unfortunately, the developer experience when defining classes is far from perfect: despite having declared interfaces separately, we will not be able to have the argument types of methods disambiguated correctly in the source code, even if we place the @implements tag above a class. This is a gross violation of the DRY principle, as methods need to be typed anew inside the source code.

Figure 14: the IDE does not provide autocompletion hints for arguments as their type info is lost.

In fact, the latest version of VSCode (1.71.2, of 14 September 2022) does not support the @implements JSDoc tag on classes at all:

Figure 15: the newer IDE does not recognise the @implements tag whatsoever (the comments present previously are now lost).

This shortcoming can be mitigated by using a specially-crafted notation that we have come up with to ensure that the argument types and return types of methods are preserved. It would be incredible if the @implements tag just worked out of the box, allowing us to access hints without special effort, but for now we have to adapt as follows.

import '../types/typedefs'
import { test } from './test'

/** @constructor @extends {com.example.Interface} */
function Implementation() {}
Implementation.prototype=/** @type {!com.example.IInterface} */({
  async perform(data) {
    test(data)
    return 123
  },
})

Figure 16: extending an "abstract" constructor com.example.Interface and assigning a cast record to the prototype.

Although the syntax is less sugary than classes, this way we do receive the appropriate dev-experience in the IDE, and the compiler will still perform its type-checking job, ensuring that our software is of a high standard of quality, while we enjoy writing our JavaScript with satisfaction.

Figure 17: in contrast to the malfunctioning class before, the IDE now lets us access auto-complete suggestions on an argument's type.

pages/index/chunks/src/class.js:9:9: WARNING - [JSC_TYPE_MISMATCH] actual parameter 1 of test$$module$pages$index$chunks$src$class does not match formal parameter
found   : string
required: number
   9|     test(data)
               ^^^^

pages/index/chunks/src/class.js:10:11: WARNING - [JSC_TYPE_MISMATCH] inconsistent return type
found   : number
required: (IThenable<string>|string)
  10|     return 123
                 ^^^

0 error(s), 2 warning(s), 97.4% typed

Figure 18: the compiler successfully validates the argument's type and method return type from within the cast prototype.

By making a @constructor function that @extends {com.example.Interface}, we created a link between the "virtual" type information about our model compiled from Typal IDL and the tangible source code. However, there might be cases when we'd like to extend a class, in which case we can use an advanced class pattern:

import '../types/typedefs'
import Dep from './dep'
import { test } from './test'

/**
 * @implements {com.example.IInterface}
 * @extends {com.example.Interface}
 */
class Implementation extends Dep {}

/** @constructor @extends {com.example.Interface} */
function _Implementation() {}
Object.assign(Implementation.prototype,
  _Implementation.prototype = /** @type {!com.example.IInterface} */ ({
    async perform(data) {
      test(data)
      return 123
    },
  }))

Assigning to the prototype of a class via =, unlike with a function, does not work: a class's prototype property is non-writable, so the assignment is silently ignored in sloppy mode and raises a TypeError in strict mode. Therefore, we use a technique similar to the one seen previously, but this time we call Object.assign. The compiler will still type-check the code, while we can work with types comfortably after casting. Sure, the syntax does look bulky, but it's the only solution that meets both build and dev-x requirements (it also acknowledges the traditional prototypal nature of JS and allows multiple trait inheritance, which is impossible with plain classes).
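This constraint can be verified directly: the prototype property descriptor of a class is non-writable, whereas a function's is writable.

```javascript
// Why `SomeClass.prototype = {...}` cannot work: the property is
// non-writable on classes, unlike on plain constructor functions.
class SomeClass {}
function SomeFunction() {}

console.log(Object.getOwnPropertyDescriptor(SomeClass, 'prototype').writable)    // false
console.log(Object.getOwnPropertyDescriptor(SomeFunction, 'prototype').writable) // true
```

This is why the function-based pattern can assign its prototype with =, while the class-based pattern has to merge methods in with Object.assign.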

Power Features

Typal IDL supports a number of exciting features that help during the development process, match modern JavaScript usage patterns, and seek to achieve maximal developer productivity, worthy of the unsurpassed excellence of the Closure Compiler.

Arcs

First of all, there is a special kind of argument node, known as an arc (argument record), which will automatically expand into a record type:

<types ns="com.example">
  <IInterface basic>
    <method name="perform" async>
      <arg name="content" string>
        A standard argument.
      </arg>
      <arc name="opts">
        <string name="path">
          The path to the file.
        </string>
        <number name="multiplier" default="2">
          The multiplier applied to the input.
        </number>
        <bool name="silent" opt>
          Do not print to the console.
        </bool>

        The options for the method.
      </arc>

      Performs certain computations on the instance.
    </method>

   An interface with arcs.
  </IInterface>
</types>

Figure 19: an interface definition where the perform method accepts a record as the second argument.

Inside the arc tag, an inner record can be placed seamlessly without having to define it outside of the class whose method it belongs to. It greatly simplifies the usage pattern when named arguments are passed to routines. There can be many arc arguments to a single method.

/** @nocompile */
/** */
var com = {}
com.example = {}

/* @typal-start {typal/pages/index/examples/IArcs.xml}  2c161d6a6f6a35831868de6b960b79ee */
/**
 * An interface with arcs.
 * @interface com.example.IInterface
 */
com.example.IInterface = class { }
/**
 * Performs certain computations on the instance.
 * @param {string} content A standard argument.
 * @param {!com.example.IInterface.perform.Opts} opts The options for the method.
 * - `path` _string_ The path to the file.
 * - `[multiplier=2]` _number?_ The multiplier applied to the input. Default `2`.
 * - `[silent]` _boolean?_ Do not print to the console.
 * @return {!Promise}
 */
com.example.IInterface.prototype.perform = function(content, opts) {}

/**
 * @typedef {Object} com.example.IInterface.perform.Opts The options for the method.
 * @prop {string} path The path to the file.
 * @prop {number} [multiplier=2] The multiplier applied to the input. Default `2`.
 * @prop {boolean} [silent] Do not print to the console.
 */

// nss:com.example
/* @typal-end */

Figure 20: an additional typedef com.example.IInterface.perform.Opts was created for the arc with minimal effort.

In addition to the auto-generated typedef, all of the record's properties will also be expanded into the description of the method where they are used, so that the user can immediately get a glimpse of documentation of arcs when calling a method:

Figure 21: the IDE will provide full documentation into all individual arc's properties as well as an overall overview.
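An implementation of such a method could consume the arc record via destructuring, mirroring the defaults declared in the XML. A sketch with a hypothetical method body:

```javascript
// Hypothetical implementation of com.example.IInterface with an arc:
// `multiplier` and `silent` receive the defaults declared in the XML.
const impl = {
  async perform(content, { path, multiplier = 2, silent = false }) {
    const value = content.length * multiplier
    if (!silent) console.log(`writing ${value} to ${path}`)
    return value
  },
}

impl.perform('abc', { path: '/tmp/out.txt', silent: true }) // resolves with 6
```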

Recurns

Secondly, in a fashion similar to arcs, there are recurn (record-return) tags that indicate that a method returns a record rather than a plain value:

<types ns="com.example">
  <IInterface basic>
    <method name="perform" async>
      <arg name="content" string>
        A standard argument.
      </arg>
      <recurn>
        <string name="result" opt>
          The result of the operation.
        </string>
        <bool name="success">
          Whether the operation was successful or not.
        </bool>
        <prop name="error" type="!Error" opt>
          An error in the cases when the result wasn't obtained.
        </prop>
        The result tuple.
      </recurn>

      Performs certain computations on the instance.
    </method>

   An interface with a recurn in a method.
  </IInterface>
</types>

Figure 22: an interface definition where the perform method can return a record defined in place.

Inside the recurn tag, any number of properties can be placed, which would otherwise have to be defined outside the interface scope and referenced in the method manually.

/** @nocompile */
/** */
var com = {}
com.example = {}

/* @typal-start {typal/pages/index/examples/IRecurns.xml}  e9437f4894aa8b93d5e6fce83d6c6021 */
/**
 * An interface with a recurn in a method.
 * @interface com.example.IInterface
 */
com.example.IInterface = class { }
/**
 * Performs certain computations on the instance.
 * @param {string} content A standard argument.
 * @return {!Promise<com.example.IInterface.perform.Return>} The result tuple.
 */
com.example.IInterface.prototype.perform = function(content) {}

/**
 * @typedef {Object} com.example.IInterface.perform.Return The result tuple.
 * @prop {boolean} success Whether the operation was successful or not.
 * @prop {string} [result] The result of the operation.
 * @prop {!Error} [error] An error in the cases when the result wasn't obtained.
 */

// nss:com.example
/* @typal-end */

Figure 23: an Object typedef will be created for the recurn, where either result or error is optional depending on the outcome.

Now when we're using the prototype-casting strategy as discussed above and attempt to return an object from a method, the IDE will immediately let us fill in the required properties using auto-completion:

Figure 24: when returning values from a method with a recurn, the IDE will let us pick the properties in a convenient manner.
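For illustration, an implementation honouring the recurn shape might look like this (a sketch: the computation itself is hypothetical):

```javascript
/** @constructor @extends {com.example.Interface} */
function Implementation() {}
Implementation.prototype = /** @type {!com.example.IInterface} */ ({
  async perform(content) {
    if (!content) {
      // On failure, `error` is set and `result` is omitted.
      return { success: false, error: new Error('no content given') }
    }
    // On success, `result` is set and `error` is omitted.
    return { success: true, result: content.toUpperCase() }
  },
})

new Implementation().perform('hello').then((res) => {
  console.log(res.success, res.result) // true HELLO
})
```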

String Enums

Sometimes it can be beneficial to choose a value from a set of predefined ones; such types are known as enums. We acknowledge the fact that Closure has a special @enum type, but in Typal we treat enums as strings that can be picked from the list of available values.

<types ns="com.example">
  <IInterface basic>
    <method name="perform" async>
      <arg name="action" type="ACTION">
        The action taken by the user.
      </arg>

      Performs certain computations on the instance.
    </method>

   An interface with enums.
  </IInterface>

  <enum name="ACTION">
    <choice val="complete-purchase">
     The purchase has been completed.
    </choice>
    <choice val="price-calculated">
     The price has been calculated for the user.
    </choice>
  </enum>
</types>

Figure 25: string enums can be defined outside interfaces, which will be improved in future to match arcs and recurns.

The string enums are only recognised by VSCode, whereas for the compiler they will always be a plain string, without checking whether the value is valid.

/** @nocompile */
/** */
var com = {}
com.example = {}

/* @typal-start {typal/pages/index/examples/IEnums.xml}  00748343813db48d9ad3eceb4ce2d465 */
/**
 * An interface with enums.
 * @interface com.example.IInterface
 */
com.example.IInterface = class { }
/**
 * Performs certain computations on the instance.
 * @param {com.example.ACTION} action The action taken by the user.
 * @return {!Promise}
 */
com.example.IInterface.prototype.perform = function(action) {}

/**
 * @typedef {'complete-purchase'|'price-calculated'} com.example.ACTION
 * Can be either:
 * - _complete-purchase_: the purchase has been completed.
 * - _price-calculated_: the price has been calculated for the user.
 */

// nss:com.example
/* @typal-end */

Figure 26: an enum is defined as a string with its values separated by the pipe | operator in typedefs.

Now when we make a call to the method, the IDE will automatically suggest all available options, so that we can pick the desired one straight away:

Figure 27: at the attempt to call a method that supports enum arguments, the IDE will issue suggestions.
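Because the enum values are plain strings at runtime, an implementation can simply switch over them (the handler bodies below are hypothetical):

```javascript
/** @param {com.example.ACTION} action */
function perform(action) {
  switch (action) {
    case 'complete-purchase':
      return 'thank you for your purchase'
    case 'price-calculated':
      return 'price is ready'
    default:
      // The compiler will not catch this, but the IDE narrows the union.
      throw new Error(`unknown action: ${action}`)
  }
}

console.log(perform('complete-purchase')) // thank you for your purchase
```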

Embed Examples

To streamline the library documentation process even further, we have added the ability to embed examples into the typedefs from the XML itself. This way, there is no need to copy-paste examples by hand which can be a daunting task.

<types ns="com.example">
  <IInterface basic>
    <method name="perform" void>
      <example>doc/jsdoc/perform.js</example>

      Performs certain computations on the instance.
    </method>

   An interface with an example.
  </IInterface>
</types>

Figure 28: to embed an example, use the example XML tag.

The content of the example tag can be a path from the project folder, or a relative path. The algorithm is also a clever one: it will rename relative imports to the package name (if importing from the same location as package.json).

/** @nocompile */
/** */
var com = {}
com.example = {}

/* @typal-start {typal/pages/index/examples/IExamples.xml}  800d51a415bdd4df8f6aa3c5d623ee6c */
/**
 * An interface with an example.
 * @interface com.example.IInterface
 */
com.example.IInterface = class { }
/**
 * Performs certain computations on the instance.
 * @example
 * ```js
 * import Implementation from '../'
 *
 * ```
 * First, create a new instance:
 * ```js
 * const iface=new Implementation
 * ```
 * And call the method:
 * ```js
 * const res=iface.perform()
 *
 * console.log('hello world:', res)
 * ```
 */
com.example.IInterface.prototype.perform = function() {}

// nss:com.example
/* @typal-end */

Figure 29: the example-embedding algorithm can automatically detect package names for package consumers.

Figure 30: the algorithm also supports "stacking", allowing examples to be supplemented with natural language.

In addition to package name detection, it is possible to leave certain parts of examples out, so that the examples can be tested independently without having to reveal those parts that do not convey meaning. The original example file looks like the following:

/* start example */
import Implementation from '../'

/* end example */

export function example() {
  /* start example */
  /// First, create a new instance:
  const iface=new Implementation
  /// And call the method:
  const res=iface.perform()

  console.log('hello world:', res)
  /* end example */
}

Figure 31: start and end example markers support whitespace normalisation and the /// comments that will break up the example.

In future versions, forking examples for execution and automatic embedding of stderr and stdout streams will also be supported.

Demo Version

To try out Typal for the Closure™ Compiler, start by entering an email address to which we can dispatch the download link. The free version supports typedef generation, so that you can validate our claim that it's possible to achieve the perfect developer experience you deserve with the power of JSDoc only! All the power features (arcs, examples, etc) are included!

And once you're ready to progress to compiling your code with externs and runtypes, please come back to purchase the full version from us.

API key will be emailed here.
Please enter your name.

Precompiled Libraries

In order to conceal the implementation details of components that Art Deco™ distributes to clients and to protect our intellectual property, we have come up with a novel packaging strategy that does not require distribution of source code. It is called "precompiled libraries", and it, too, becomes easy once the design infrastructure is taken out of the source code and placed in IDL. Let us explain how Typal can help please your corporate lawyers further.

Imagine we created a library that supports string transformation with the following API:

<types namespace="eco.artd">
  <function name="snakeCase" string>
    <arg name="string" string>
      The string to transform.
    </arg>
    Converts a string into `snake_case`, with all lower-case letters and an
    underscore for whitespace.
  </function>

  <function name="kebabCase" string>
    <arg name="string" string>
      The string to transform.
    </arg>
    Converts a string into a `kebab-case`, with all lower-case letters and a
    dash for whitespace.
  </function>
</types>

The implementation is pretty straightforward.

import '../types'
import { kebabCase } from './kebab-case'
import { snakeCase } from './snake-case'

export { kebabCase, snakeCase }

The next step is to create the library entry point, where we use module.exports to assign the package's components (don't worry if this looks like Node code to you, we just borrow the idea of a global module variable — the code will still be executable in the browser):

import { kebabCase, snakeCase } from './'

module.exports = {
  [1]:kebabCase,
  [2]:snakeCase,
}

In order to preserve the integrity of the module across future versions, we use integers (à la protobuf) to export the implemented functions. Although we could have used the function names themselves on the module object and added such property definitions to externs, integers also help save space.
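The idea can be illustrated with a toy module (the function bodies here are hypothetical, not the actual string-util implementation): consumers look up field numbers rather than names, so renaming a function cannot break them, only reassigning its number would.

```javascript
// Integer keys act like protobuf field numbers.
const moduleExports = {
  [1]: (s) => s.replace(/\s+/g, '-').toLowerCase(), // kebabCase
  [2]: (s) => s.replace(/\s+/g, '_').toLowerCase(), // snakeCase
}

// A consumer accesses the functions by number; object keys are strings.
console.log(moduleExports['1']('Art Deco')) // art-deco
console.log(moduleExports['2']('Art Deco')) // art_deco
```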

The next step would be to compile the entry point. The result of the compilation is the following:

module.exports={
  [1]:b=>b.replace(
    /(^|.)([A-Z])/g,(d,a,c)=>a+(a?"-":"")+c.toLowerCase(),
  ),
  [2]:b=>b.replace(
    /(^|.)([A-Z])/g,(d,a,c)=>a+(a?"_":"")+c.toLowerCase(),
  ),
}

//# sourceMappingURL=string-util.js.map

Although the output does resemble the source code a little, all variables appear mangled, and it is really hard for anyone who attempts to study the working logic of your algorithms to derive meaning from such "object code". Our goal is to let package consumers use this precompiled code for their purposes, without distributing the source code. This will make more sense in the context of larger libraries.

The assignment to module.exports is possible because we provided a module definition in the externs file:

/** @var */
var module = {
 exports: {},
}

The next step is to prepare a template file, where we use the import Module line with some service markers understood by Typal:

import Module from './browser' /* compiler fn:../string-util mod renameReport:../module.txt packageName:@artdeco/string-util */

/** @export ./api.js */

We also need an api.js file that enumerates all the functions exported as part of our public API (and includes the licensing info). The import location (../../src) is not important and only helps us access the APIs for our convenience:

import { snakeCase, kebabCase } from '../../src'

/**
@license
@artdeco/string-util (c) by Art Deco (tm) 2022.
Please make sure you have a Commercial License to use this library.
*/

/** @api {eco.artd.snakeCase} */
export { snakeCase }

/** @api {eco.artd.kebabCase} */
export { kebabCase }

The contents of the browser.js file:

import getModule from '../string-util-upd'

/**
@license
@LICENSE                      Warning!

This file links to (or embeds, in which case you will see object code
boundaries comments before and after), proprietary code from the package(s):
*/

const Module = getModule({}, window['DEPACK_REQUIRE'])

export default Module

Now when we run Typal against the template, the following index.js will be generated, which will serve as the entry point for the package:

import Module from './browser'

/**
@license
@artdeco/string-util (c) by Art Deco (tm) 2022.
Please make sure you have a Commercial License to use this library.
*/

/** @type {!eco.artd.snakeCase} */
export const snakeCase = Module['2']

/** @type {!eco.artd.kebabCase} */
export const kebabCase = Module['1']

The api.js was combined with the template to produce a file that imports a module from the browser.js file and, for each of the @api endpoints, exports a property of the Module according to the integer indices. The interesting part lies in the way Typal has also updated the compiled file to produce the string-util-upd file imported by the template:

import { isShared, getPackageName } from '../version'

const packageName=getPackageName()

function getEmbeddedModule(exports, require, module = {}, __filename = '', __dirname = '') {
 const fn = new Function('exports, require, module, __filename, __dirname', `
/*! @embed-object-start {@artdeco/string-util} */
module.exports={
  [1]:b=>b.replace(
    /(^|.)([A-Z])/g,(d,a,c)=>a+(a?"-":"")+c.toLowerCase(),
  ),
  [2]:b=>b.replace(
    /(^|.)([A-Z])/g,(d,a,c)=>a+(a?"_":"")+c.toLowerCase(),
  ),
}
/*! @embed-object-end {@artdeco/string-util} */`)
 fn(exports, require, module, __filename, __dirname)
 const m = module['exports']
 if (require) require[packageName] = m
 return m
}

function getSharedModule(exports, req) {
 return req(packageName)
}

const shared = isShared()

export default shared ? getSharedModule : getEmbeddedModule

As can be seen from above, Typal has read the compiled code and wrapped it in a getEmbeddedModule function as a template-literal string. It also added the getSharedModule function that could be used to get the module from our require system instead of embedding its compiled code. The version.js file:

/**
 * @suppress {uselessCode}
 */
export const isShared = () => { try {
  return ARTDECO_STRING_UTIL_COMPILE_SHARED
} catch (err) {
  return false
}}

/**
 * @suppress {uselessCode}
 */
export const getPackageName = () => { try {
  return ARTDECO_STRING_UTIL_PACKAGE_NAME
} catch (err) {
  return '@artdeco/string-util'
}}

The try-catch blocks in the version file allow us to overcome Closure's limitation whereby --define's are incompatible with ECMA modules. Using shared packages is an advanced topic that we will not discuss here: we are only interested in embedding precompiled libraries at the moment, but by passing -D=ARTDECO_STRING_UTIL_COMPILE_SHARED during compilation, we could prevent the compiler from even embedding the object code and let our software rely on a package made available as a property of window['DEPACK_REQUIRE'].

Now if we package and distribute the generated files and externs appropriately, our clients will be able to import the library APIs in the form of precompiled code:

import { kebabCase } from '@artdeco/string-util'

const a = kebabCase('ArtDeco')
console.log(a)

And when the consumer-code is compiled, Closure will not waste time on type-checking and compilation of our module, because the entire package is distributed as a string.

/*

@LICENSE                      Warning!

This file links to (or embeds, in which case you will see object code
boundaries comments before and after), proprietary code from the package(s):
*/
/*

@artdeco/string-util (c) by Art Deco (tm) 2022.
Please make sure you have a Commercial License to use this library.
*/
const f=(0,function(a,b,c={},d="",e=""){(new Function("exports, require, module, __filename, __dirname",`
/*! @embed-object-start {@artdeco/string-util} */
module.exports={
 [1]:b=>b.replace(/(^|.)([A-Z])/g,(d,a,c)=>a+(a?"-":"")+c.toLowerCase()),
 [2]:b=>b.replace(/(^|.)([A-Z])/g,(d,a,c)=>a+(a?"_":"")+c.toLowerCase())
};
/*! @embed-object-end {@artdeco/string-util} */`))(a,b,c,d,e);a=c.exports;b&&(b[""]=a);return a}({},window.DEPACK_REQUIRE)["1"])("ArtDeco");console.log(f);

//# sourceMappingURL=consumer.js.map

The only thing left to do is to define DEPACK_REQUIRE before the package can be used:

function DEPACK_REQUIRE(packageName) {
  var mod = DEPACK_REQUIRE[packageName]
  return mod
}
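Putting the pieces together: the embedded module registers itself on the require function (the require[packageName] = m line in getEmbeddedModule above), so a later shared lookup can resolve it by name. A minimal sketch with a hypothetical module object:

```javascript
function DEPACK_REQUIRE(packageName) {
  var mod = DEPACK_REQUIRE[packageName]
  return mod
}

// What getEmbeddedModule effectively does after evaluating the object code
// (the exports object here is hypothetical):
const m = { [1]: (s) => s.toLowerCase() }
DEPACK_REQUIRE['@artdeco/string-util'] = m

// A shared consumer can now resolve the package by name:
console.log(DEPACK_REQUIRE('@artdeco/string-util')['1']('ArtDeco')) // artdeco
```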

We are sorry if this all sounds overwhelming at first; with practice, it will start making sense. Also, some people might argue that from 2022 onward we should be using native modules rather than experimenting with our own module systems. We believe this is not true: whereas imports certainly should be used in source code, there's plenty of room to experiment with alternatives like our hybrid solution. Here's a simple diagram that illustrates the above process:

Of course, precompiled libraries do exhibit a few disadvantages: it is not possible to optimise the source code for a specific language version, and tree-shaking will not apply. However, both drawbacks can theoretically be overcome: firstly, it's possible to supply multiple versions of object code for different language standards (ES5, ES2022, etc); and secondly, with a little more effort we could compile each function as a chunk and wrap each individual method in a fashion similar to the above. On the other hand, less time will be spent on compilation, and you can distribute your intellectual property without publishing source code. If you invest in our software, we will be able to develop those ideas further.

Finally, there's only one last thing we need to do before publishing the library: prepare the main.js file. Although we have created a browser.js file which will serve as the entry point for the compiler, we still want to make sure that package consumers get the hints for using our functions.

Let's prepare another template:

import * as api from './compile' /* compiler renameReport:./module.txt */
require('../types/typedefs')

/** @export ./browser/api.js */

Here, we reused the api.js file and required a typedefs file where our functions were defined. The template will be transformed into the following:

const { 2: _2, 1: _1 }   = require('./compile')
require('../types/typedefs')

/**
@license
@artdeco/string-util (c) by Art Deco (tm) 2022.
Please make sure you have a Commercial License to use this library.
*/

/**
 * Converts a string into a `snake_case`, with all lower-case letters and an
 * underscores for whitespace.
 * @param {string} string The string to transform.
 * @return {string}
 */
function snakeCase(string) {
  return _2(string)
}

/**
 * Converts a string into a `kebab-case`, with all lower-case letters and a
 * dash for whitespace.
 * @param {string} string The string to transform.
 * @return {string}
 */
function kebabCase(string) {
  return _1(string)
}

module.exports.snakeCase = snakeCase
module.exports.kebabCase = kebabCase

Although the file appears to be in Node's require format, it will actually never be executed in the browser, and will be picked up by the IDE only for dev-x purposes (this is achieved by setting the main field of package.json to the latter file, while pointing the browser field to the browser.js module). Actually, this file can also be run in Node.js, as no Web APIs were used.

Because we used a database of integers to prepare the module, we have completely decoupled the compiled code from its packaging, while the design infrastructure is powered by pure JSDoc that supports absolutely every possible use case (generics, classes, functions). Library packaging is the 4th target of the IDL-driven development that makes the case for Typal.

Depack Packager

At Art Deco Code, we love the Google Closure Compiler, and have built our technical architecture around it. In addition to Typal, we have produced Depack: a packaging and static-analysis tool that scans for all the JS files to supply to the compiler, manages the list of externs/runtypes, and controls additional arguments to pass, all from one place.

The standard JSON configuration looks like the following:

{
  "output": "pages/index/chunks/library/chunks",
  "browser": true,
  "lib": {
    "snake-case.js": "src/s.js",
    "kebab-case.js": "src/k.js"
  },
  "args": [
    "--formatting", "PRETTY_PRINT",
    "--property_renaming_report",
    "pages/index/chunks/library/compile/rename-report.txt"
  ],
  "hide_warnings_for": [
    "pages/index/chunks/library/types/typedefs/api.js"
  ],
  "externs": [
    "pages/index/chunks/library/types/runtypes-externs/ns/eco.artd.js",
    "pages/index/chunks/library/types/runtypes-externs/index.externs.js",
    "pages/index/chunks/library/types/runtypes-externs/node-browser.js"
  ],
  "runtypes": [
    "types/runtypes-externs/ns/__$te.ns",
    "types/runtypes-externs/ns/$eco.ns",
    "types/runtypes-externs/ns/$eco.artd.ns",
    "types/runtypes-externs/symbols.js",
    "types/runtypes-externs/api.js"
  ]
}

The tool will read the entry files specified in lib and look for ECMA module imports during static analysis of dependencies. When it encounters a package import, it will also substitute the right entry point. In a way, it works like the old Python tool that was used to build the dependency tree, but it does not work with goog.module modules and only supports pure JS imports and exports.
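The static-analysis step can be pictured with a toy import scanner (this is an illustration only, not Depack's actual algorithm):

```javascript
// Collects import specifiers from a module's source with a regex.
function findImports(source) {
  const specifiers = []
  const re = /^import\s+(?:[^'"]*from\s+)?['"]([^'"]+)['"]/gm
  let match
  while ((match = re.exec(source)) !== null) specifiers.push(match[1])
  return specifiers
}

const src = "import { kebabCase } from './kebab-case'\nimport './side-effect'"
console.log(findImports(src)) // [ './kebab-case', './side-effect' ]
```

A real tool would parse the AST rather than use regexes, then recurse into each discovered file to build the dependency tree.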

If a dependency's package.json specifies externs or runtypes fields, these are read and appropriate measures are taken to respect them for correct compilation.

Depack also supports chunking using our own "corking" algorithm (please subscribe to email updates on the left to be notified when we publish an article about it). Below is an example of 3 chunks that will be created using the JSON config from above and the source code of the string-util package from the previous section:

const d = /(^|.)([A-Z])/g;

const g = (b => b.replace(d, (e, a, c) => a + (a ? "-" : "") + c.toLowerCase()))("ArtDeco");
console.log(g);

const f = (b => b.replace(d, (e, a, c) => a + (a ? "_" : "") + c.toLowerCase()))("ArtDeco");
console.log(f);

Figure 32: a shared chunk (top), followed by a kebab-case chunk and a snake-case chunk which were compiled together but split into separate files.


Node.JS Compilation

In fact, Depack was designed as a module bundler for Node.JS projects. It fully supports analysis of Node APIs, and we have included the list of externs for server-side programming.

Not only that, but we have implemented a smart wrapper for Node.JS chunks that will load chunk dependencies using a dynamic analysis technique, facilitated by a runtime script that seals the chunks together (because Node.JS does not provide a single execution context between modules, known as window in the browser). This strategy allows chunks to be lazy-loaded on demand without any special effort from the developer.

const start = Date.now()
  var $$COMPILER_EVAL=p=>eval(p)
  with(require('./runtime')(module,$$COMPILER_EVAL,{},'base.js')) {
  
const d = /(^|.)([A-Z])/g;

  $$COMPILER_EVAL.inner=function(p){return eval(p)};
  }
  process.env.COMPILER_PROFILE&&console.log('%dms to load %s', Date.now() - start, module.filename)

//# sourceMappingURL=A.js.map
const start = Date.now()
  var $$COMPILER_EVAL=p=>eval(p)
  with(require('./runtime')(module,$$COMPILER_EVAL,{},'A.js','B.js')) {
  
module.exports = {[1]:b => b.replace(d, (e, a, c) => a + (a ? "-" : "") + c.toLowerCase()),};

  $$COMPILER_EVAL.inner=function(p){return eval(p)};
  }
  process.env.COMPILER_PROFILE&&console.log('%dms to load %s', Date.now() - start, module.filename)

//# sourceMappingURL=kebab-case.js.map
const start = Date.now()
  var $$COMPILER_EVAL=p=>eval(p)
  with(require('./runtime')(module,$$COMPILER_EVAL,{},'A.js','B.js')) {
  
module.exports = {[2]:b => b.replace(d, (e, a, c) => a + (a ? "_" : "") + c.toLowerCase()),};

  $$COMPILER_EVAL.inner=function(p){return eval(p)};
  }
  process.env.COMPILER_PROFILE&&console.log('%dms to load %s', Date.now() - start, module.filename)

//# sourceMappingURL=snake-case.js.map

We use eval to dynamically decide which file contains a variable that a chunk depends on (due to the fact that Node files do not share the same variable space). Although there might be a slight performance hit, it is negligible, and the advantage of loading only the required files outweighs this drawback. If you study the files, you'll see that the second and third chunks simply call b.replace(d,...) while the d variable is not imported from the first chunk using a standard require — it will be "automagically" disambiguated at runtime using our eval-based path finder.

The runtime code is shown below:

const $$DEPACK_BUILT_INS = {  }
'use strict'
const path = require('path');             const g=path.dirname,k=path.join,m=path.relative;const n={black:30,red:31,green:32,yellow:33,blue:34,magenta:35,cyan:36,white:37,grey:90},p={reset:0,bold:1,i:4,reverse:7,h:8},q=a=>{Array.isArray(a)&&(a=a.join(";"));return`\x1b[${a}m`};function r(a,c){c=[n[c],...Object.keys({}).map(f=>p[f.toLowerCase()])].filter(Boolean);if(!c.length)return a;c=q(c);const e=q(0);return`${c}${a}${e}`}/*

 -= Depack Runtime =-
 Art Deco EULA: Not Open Source License.
 For unlimited free use without warranty
 as part of acquired distributed packages,
 but

 Not for publication on public registries
 in any form, whether distributed on its own
 or as part of a bundled package offering,
 except the package marketplace Ludds.io,
 unless with prior written permission.

 (c) 2022 Art Deco Code Ltd
     London, UK
*/
const t=require("./rename.map"),u=Object.getOwnPropertyNames(global).reduce((a,c)=>{if(c.startsWith("$"))return a;a[c]=!0;return a},{module:!0,require:!0,__dirname:!0,__filename:!0}),v={[Symbol.unscopables]:u},{COMPILER_PROFILE:w}=process.env;function x(a,c,e){var f=y;const d=m(__dirname,c.filename);e={f:a,c:e,rel:d,filename:c.filename,get g(){return a.inner}};f.a[c.filename]=e;f.b[d]=e}
function z(a){var c=y,e=c.a[a];if(!e)return[];var f=e.c;e=[];for(const d of f)f=k(g(a),d),e.push(c.a[f]);return e}class A{constructor(){this.a={};this.b={}}}
class B{constructor(a){const c=z(a.filename),e={},f={};return new Proxy(v,{has(d,b){return["eval","$$COMPILER_EVAL"].includes(b)||b in f?!1:!0},get:(d,b)=>{if(b in d)return d[b];if(b in $$DEPACK_BUILT_INS)return $$DEPACK_BUILT_INS[b];if(b in global)return global[b];if(b in e)return e[b];f[b]=!0;let h;w&&console.error("%s %s needs %s.",r("[\u2a7c]","grey"),r(m("",a.filename),"magenta"),b);d=t[b];let l;d&&(d=y.b[d])&&(l=[d]);if(l)try{h=C(b,l,a.filename)}catch(D){h=C(b,c,a.filename,!0)}else h=C(b,c,
 a.filename,!0);h||(h=global[b]);void 0!==h&&(delete f[b],e[b]=h,w&&console.error("%s Hashing %s.",r("[#]","grey"),r(b,"blue")));return h}})}}function C(a,c,e,f=!1){let d=void 0;for(const b of c){try{d=b.f(a)}catch(h){try{d=b.g(a)}catch(l){if(!f)throw l}d&&w&&console.error("%s %s got %s from inner scope of %s",r("[\u2a7b]","grey"),r(m("",e),"green"),r(a,"cyan"),m("",b.filename))}if(d)break}return d}const y=new A
module.exports=function(a,c,e,...f){for(const d of f)try{require(k(g(a.filename),d))}catch(b){throw console.log(r("Depack Runtime Error","red")),console.error("Could not require dependency chunk %s for compiled module %s",r(d,"blue"),r(m("",a.filename)," yellow")),b}x(c,a,f);return new B(a)}

The runtime code thus works as an execution environment that provides a single global memory space for variables that are otherwise placed into file-based modules and have no natural means of interacting within a Node process.

We also use Typal once again to prepare a front for the chunked compilation units, and use a @lazy tag on API endpoints to only load them when requested. The IDE will use the module.exports AST leaves to power auto-completions and hints, while Node will pick up properties defined via Object.defineProperties as named exports.

import { kebabCase } from './kebab-case' /* compiler renameReport:./module.txt */
import { snakeCase } from './snake-case' /* compiler renameReport:./module.txt */
require('../types/typedefs')

/** @lazy @api {eco.artd.snakeCase} */
export { snakeCase }

/** @lazy @api {eco.artd.kebabCase} */
export { kebabCase }
require('../types/typedefs')

/**
 * Converts a string into `snake_case`, with all lower-case letters and an
 * underscore for whitespace.
 * @param {string} string The string to transform.
 * @return {string}
 */
function snakeCase(string) {
  // lazy-loaded(string)
}

/**
 * Converts a string into `kebab-case`, with all lower-case letters and a
 * dash for whitespace.
 * @param {string} string The string to transform.
 * @return {string}
 */
function kebabCase(string) {
  // lazy-loaded(string)
}

module.exports.snakeCase = snakeCase
module.exports.kebabCase = kebabCase

Object.defineProperties(module.exports, {
  'snakeCase': {
    get: () => require('./snake-case')['2'],
  },
  'kebabCase': {
    get: () => require('./kebab-case')['1'],
  },
})
The Luddites registry notice ⚠️

Please note, we do not allow publishing packages compiled as chunks that use the runtime on npm, because we're trying to build a paid package registry to discourage developer exploitation and bring about a state of the industry where people are rewarded fairly for their programming work. We grant permission to publish trials that lead to a paid version of software, upon request. Internal use within your organisation is fine.

Front-End Middleware

To make it easier to develop JavaScript code without relying on overly complex bundlers, we'd like to present another innovation, called front-end middleware: a piece of server infrastructure that serves modules directly to the browser, without transpiling them into bundler-specific formats like Webpack modules.

The transformation of source code is kept to the absolute bare minimum: the middleware will only update import paths to node_modules packages imported by name, e.g., import '@artdeco/string-util' will become import '/node_modules/@artdeco/string-util/index.js'. This is because the browser cannot resolve bare package specifiers and needs absolute paths. We don't patch code in any other way, or wrap it in a vendor-specific module system.
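The rewrite step can be sketched as follows. This is a hypothetical illustration, not the actual middleware: only import specifiers that name a package (not starting with ./, ../ or /) are prefixed, and resolving to index.js is an assumption drawn from the example above.

```javascript
// Hypothetical sketch of the bare-specifier rewrite: package imports are
// prefixed with /node_modules/, relative and absolute paths pass through.
const rewriteImports = (source) =>
  source.replace(
    /(import\s+[^'"]*['"])([^'".\/][^'"]*)(['"])/g,
    (match, before, specifier, after) =>
      `${before}/node_modules/${specifier}/index.js${after}`,
  )

// A bare specifier gets rewritten...
const out = rewriteImports("import '@artdeco/string-util'")
// ...while a relative import is left untouched.
const kept = rewriteImports("import './local.js'")
```

A real implementation would read the package's manifest to find its main file rather than assuming index.js.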

In addition, front-end middleware also supports hot-reload of functions, classes and variables. It relies on the fact that ES module exports in the browser are live bindings throughout the source code, i.e., after importing a function in one place, if it's updated in its home file, the updated function is also seen everywhere else. We use this to our advantage to patch exported functions ad hoc once their source is updated on the filesystem.
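The live-binding idea can be illustrated in plain script form. This is a conceptual sketch only: real ES modules provide this behaviour natively through their export bindings, whereas here the "home module" is simulated with an object.

```javascript
// Conceptual sketch of hot-reload via live bindings. In real ES modules,
// importers see the current value of an exported binding, not a snapshot.
const home = { impl: (s) => 'v1:' + s } // stand-in for the exporting module

// The consumer holds a live reference, not a copy of the function.
const consumer = { greet: (s) => home.impl(s) }

const before = consumer.greet('x') // uses the original implementation
home.impl = (s) => 'v2:' + s       // "file changed on disk": patch the export
const after = consumer.greet('x')  // every importer now sees the new version
```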

The front-end middleware supports Preact 8 with JSX syntax out of the box, including hot-reload of Preact components. We don't support later hook-based versions, as we have developed our own front-end framework called Web Circuits with transconsistent composition via Subject-Oriented Programming, rather than using functional hooks that break the separation of concerns between design and code. We highly recommend joining the mailing list for more info about the upcoming world-class cross-platform component architecture.

Type-Engineer

During the development of Typal, it became apparent that standard OOP practices are outdated and riddled with disadvantages such as single inheritance (rather than multiple inheritance). Fortunately, JavaScript is not just a language but a powerful virtual machine that can be used to emulate more modern forms of component composition by working with objects' prototypes. In other words, the reflective nature of JS means there is an opportunity to exploit the language's meta-object protocol to "bring your own OOP". In fact, we have come up with just such a runtime library, called type-engineer, which is discussed here.

For example, during the development of Typal, we discovered that there are two kinds of functions that need to be modelled: a function, as a first-class citizen of JS, and a method, which is a member of a class. Yet traditional OOP does not allow for clear modelling: a method would extend a member, just like a field, but a function does not extend anything, so there's no way to share behaviour between the two. While we can use an ICallable interface to define shared properties (args list, return type, etc.) and behaviour (e.g., addArg), we cannot let a Method inherit from both a Member concrete class and a Callable concrete class. Software reuse in OOP is weak, unless we start using traits for multiple inheritance.

In our daily work, we generate not only typedefs and externs, but also abstract classes, to bridge the gap between interfaces and implementation. Abstract classes overcome the single-inheritance limitation by supporting a static __implement function that takes any number of arguments, instead of just one extension hook:

import { AbstractMethod } from '../../../types/lux/Member'
import Callable from '../Functions/Callable'
import Member from './'

/** @extends {_typal.Method} */
export default class Method extends AbstractMethod.__implement(
  Member, Callable,
  AbstractMethod.prototype = /** @type {!Method} */ ({
    __$constructor(node) {
      const {
        asIMember:{
          asITypal: { type: { parsed } },
        },
        asICallable: { allArgs, upgrade, bind, addArgument },
      } = this
      const converted = convertArgs(parsed, allArgs)
      for (const arg of converted) {
        if (arg.isThis) bind(arg)
        else if (arg.name == 'new') upgrade(arg)
        else addArgument(arg)
      }
    },
    get isConstructor() {
      const {
        asIDocumentable: { name },
      } = this
      return name == 'constructor'
    },
  }),
) {}

Using the __implement method of the static class, we work around the restrictive single-inheritance model of JavaScript by updating the prototype of said abstract class with multiple traits (Member, Callable). And because the designs are now provided by Typal and kept outside the source code, we are able to maintain an exceptionally high developer experience.

One additional perk of using type-engineer is that all methods will be automatically bound to the instance upon access (known as bound destructuring), so you can forget about having to prefix method calls with this. Abstract classes also supply the asIMember/asICallable getters that cast the instance to the needed type.
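One way such binding on access could be achieved is with a prototype getter that returns the method pre-bound to the instance. This is a hedged sketch of the idea, not type-engineer's actual mechanism; autoBind, Counter and inc are illustrative names.

```javascript
// Hypothetical sketch of bound destructuring: a prototype getter hands
// out the method already bound to the instance it was accessed on.
function autoBind(Class, name, fn) {
  Object.defineProperty(Class.prototype, name, {
    get() { return fn.bind(this) }, // bound on every property access
  })
}

class Counter {
  constructor() { this.n = 0 }
}
autoBind(Counter, 'inc', function () { return ++this.n })

const counter = new Counter()
const { inc } = counter // destructured, yet still bound to `counter`
inc()
inc()
```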

import { AbstractFunction } from '../../../types/lux/Functions'
import Documentable from '../Documentable'
import Modula from '../Modules/Modula'
import { Typal } from '../Types'
import Callable from './callable'

/** @extends {_typal.Function} */
export default class Function extends AbstractFunction.__implement(
  Modula, Documentable, Typal, Callable,
  AbstractFunction.prototype = /** @type {!Function} */ ({
    get isFunction() {
      return true
    },
    printName(opts) {
    },
  }),
) {}

A function is different from a method in that it can be treated as a module, just like a class (a method cannot be a module as it belongs to a class only), so we add the Modula trait to the list of implementations. Type-Engineer is a runtime that allows composing behaviour from atomic units known as traits, which provide much better opportunities for code reuse and modelling strategies. It is practically impossible to model the Function/Method dichotomy effectively without traits. Our daily programming experience has been fully transformed thanks to multiple inheritance via traits. Can you think of any scenario where traits would be beneficial to you?

Finally, you might have noticed the use of the special __$constructor method in the Method class: all such methods on all traits that make up a target class will be executed one by one, instead of only the last one taking precedence and overriding the rest.

import { AbstractCallable } from '../../../types/lux/Functions'
import Generic from '../Generics'

/** @extends {_typal.Callable} */
export default class Callable extends AbstractCallable.__implement(
  Generic,
  AbstractCallable.prototype = /** @type {!Callable} */ ({
    __$constructor(node, cb) {
      const { attributes: {
        'async': async, 'ret': ret, 'returns': returns,
        'opt-return': optReturn, ...props
      }, children } = node
      const {
        asICallable:asICallable,
        asICallable: {
          declareException,
        },
      } = this
      if (async) asICallable.async = true

      let retType = getPropType({ ...props, 'type': ret }) || ''

      if (async && retType) retType = `!Promise<${retType}>`
      else if (async) retType = '!Promise'

      if (optReturn) {
        retType = `void|${!async ? '' : `!Promise<void>|`}${retType}`
      }

      if (returns) asICallable.returns = returns

      const exceptions = nodeSelector(children, 'exception')
      for (const exception of exceptions) {
        const { content: description, attributes: {
          'type': type,
        } } = exception
        declareException({
          description:description,
          type:type,
        })
      }
    },
    addArgument(arg) {
    },
    declareException(exc) {
    },
    bind(arg) {
    },
    upgrade(arg) {
    },
    getArgsList() {
    },
  }),
) {}

For instance, the Callable trait's __$constructor (above) is concerned with reading the async and returns attributes of an XML node, as well as exception nodes from its children, while the Method trait's __$constructor performs its own job of adjusting the method's arguments if they represent the this: or new: keywords. Under the traditional OOP model, the super keyword would have to be called from inside the constructor, but now we have many supers:

// conceptual illustration only: each trait's __$constructor runs in turn
class Function {
  constructor(node) {
    super.callable(node)
    super.generic(node)
    super.modula(node)
    // function's own constructor
  }
}

When methods with the same name from a variety of traits are combined into a single one and centralised control is relinquished, it is called Subject-Oriented Programming (multiple subjects, i.e. traits, reside within the scope of a single object). Such a system architecture is the next-generation software composition model that you can already take advantage of today with type-engineer and Typal.
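To make the composition model concrete, here is a much-simplified, hypothetical __implement: it copies trait members onto a composed prototype and chains every trait's __$constructor, so all of them run instead of the last one winning. The trait names below echo the examples above, but the bodies are illustrative.

```javascript
// Hypothetical, simplified trait composition: every trait contributes its
// members, and all __$constructor hooks are chained rather than overridden.
function implement(...traits) {
  const ctors = traits.map((t) => t.__$constructor).filter(Boolean)
  class Composed {
    constructor(node) {
      for (const ctor of ctors) ctor.call(this, node) // every "super" runs
    }
  }
  for (const trait of traits) {
    for (const key of Object.keys(trait)) {
      if (key !== '__$constructor') Composed.prototype[key] = trait[key]
    }
  }
  return Composed
}

const Callable = {
  __$constructor(node) { this.async = !!node.attributes.async },
  addArgument(arg) { (this.args = this.args || []).push(arg) },
}
const Documentable = {
  __$constructor(node) { this.name = node.attributes.name },
}

class Fn extends implement(Callable, Documentable) {}
const fn = new Fn({ attributes: { name: 'snakeCase', async: true } })
```

The real runtime additionally handles getters, conflict reporting and the asI* casts, which this sketch omits.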

Pricing

Products

Thank you for your interest in Typal. We hope we could convey the innovative nature of our solutions to the existing problems when working with the Closure Compiler and JavaScript programming at large. Please find the price tags listed below. Each level includes all features from the previous ones. We used zero Open Source to bring these tools to you, so why not support a young software company that solves your pain points with some real Computer Science?

Demo

free
  • Typedefs generation
  • Args & returns
  • String enums
  • Embed examples
Download ⬇️

Typal

$124.50
  • Externs generation
  • Runtypes generation
  • Overcome circular dependencies
  • Program to interface not implementation
Buy 💳

Typal + Tools

$249
  • Precompiled libraries
  • Depack packager with chunking
  • Node.js compilation
  • Frontend middleware (includes hot reload)
Buy 💳

Type-Engineer

$499
  • Multiple inheritance (traits)
  • Bound Destructuring
  • Subject-Oriented Programming
  • Aspect-Oriented Programming
  • Plugin Architecture
  • Database Modelling
Contact ✉️

TYNG

$999
  • Programmatic APIs
  • Create JSX-based DSLs
Contact ✉️

Questions And Answers

Q: Can I buy Typal first, and purchase upgrades to more advanced products later?

A: Yes, you can. You will pay $124.50 first to receive the typal binary essential for Closure development, and another $124.50 for the tools.

Q: Why do I need to contact you about purchasing Type-Engineer?

A: At the moment, we are in the process of securing patents for the technical implementation of SOP and AOP. We can only offer these tools to individuals and companies that are able to sign a non-disclosure agreement to become early adopters of this generative technology.

Q: Do the prices include support?

A: E-mail support with answers within 2 days (usually a couple of hours on a UK working day) is included in each plan.

Q: Are these perpetual licenses?

A: The licenses are perpetual but do not include major version bumps. You can renew your license to prolong the support for the next year, but Typal and its tools are really to be considered stable, professional and bug-free solutions.

Q: How can I contact Art Deco about any queries regarding Typal software purchase or download?

A: You can e-mail us at typal@artdeco.support. You can also contact us on Keybase encrypted chat.

Q: How many people can use a single license?

A: A single license is for one person only. If you have 5 people on your team, you'll need 5 licenses.

Q: What happens when I make a purchase?

A: We will use the email address that you entered to send you an API key to a private registry. You'll need to update the .npmrc file in your project directory to point to our registry and place the key in there.
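For illustration, a scoped .npmrc entry has the following shape. The scope name, registry URL and token below are placeholders, not our actual values; the real ones arrive with your purchase e-mail.

```ini
; placeholder values: use the registry URL and API key from your e-mail
@typal:registry=https://registry.example.com/
//registry.example.com/:_authToken=YOUR_API_KEY
```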

Q: What is TYNG?

A: TYNG is Typal with rich programmatic APIs so that you can come up with your own generators for any programming language using the same XML-based IDL. It is the ultimate piping tool that you can utilise to build any kind of utils for your development team. Usually, only a few licenses are required here, for architects, as developers don't need access to TYNG APIs on a daily basis.