Docusign SDKs: Our Story, Part II

The Journey from Swagger to SDK


Welcome to the second post of the series, where we talk about how we build, maintain, and ship SDKs at Docusign. If you haven’t seen the first post, make sure you read it first. It shouldn’t take you more than three minutes! ⚡

Most Docusign APIs are built on the .NET Framework, so when we began developing our SDKs, a natural choice was to build a common Swagger Generator library that uses reflection to walk all the types and annotations in the code and emit a valid Swagger file. 🔥
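
As a rough illustration, a Swagger 2.0 file boils down to a JSON document like the sketch below. The endpoint, operation, and field values here are invented examples for this post, not the actual generator output:

```javascript
// Hypothetical sketch of the kind of Swagger 2.0 document a reflection-based
// generator emits. Names and values are illustrative only.
const swaggerDoc = {
  swagger: "2.0",
  info: { title: "Example API", version: "v2.1" },
  basePath: "/restapi",
  paths: {
    "/v2.1/accounts/{accountId}/envelopes": {
      post: {
        operationId: "Envelopes_PostEnvelopes",
        summary: "Creates an envelope.",
        parameters: [
          { name: "accountId", in: "path", required: true, type: "string" }
        ],
        responses: { "201": { description: "Created" } }
      }
    }
  }
};

// A tiny sanity check of the shape downstream tooling relies on:
// every path maps HTTP verbs to operation objects.
function listOperations(doc) {
  const ops = [];
  for (const [path, methods] of Object.entries(doc.paths)) {
    for (const [verb, op] of Object.entries(methods)) {
      ops.push(`${verb.toUpperCase()} ${path} (${op.operationId})`);
    }
  }
  return ops;
}

console.log(listOperations(swaggerDoc));
```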

With each core platform release, a new version of the Swagger file is generated and shared in an internal repo along with some metadata, as shown here:

Figure 1


The file is then imported into an internally built web app called Tir, which validates and ingests it, then decorates it with the help of Swagger extension fields. This is what decorating the Swagger file looks like:

Figure 2
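
Under the hood, Swagger’s vendor-extension mechanism is simply extra fields prefixed with `x-`. As a sketch of what a decoration step could look like (the extension names below are invented for illustration, not the ones Tir actually uses):

```javascript
// Sketch of a decoration step. Swagger permits arbitrary vendor-extension
// fields on an operation, as long as they are prefixed with "x-".
// The extension names used below are hypothetical examples.
function decorateOperation(operation, extensions) {
  const decorated = { ...operation };
  for (const [key, value] of Object.entries(extensions)) {
    if (!key.startsWith("x-")) {
      throw new Error(`Extension fields must start with "x-": ${key}`);
    }
    decorated[key] = value;
  }
  return decorated;
}

const op = {
  operationId: "Envelopes_PostEnvelopes",
  responses: { "201": { description: "Created" } }
};
const decorated = decorateOperation(op, {
  "x-sdk-method-name": "createEnvelope", // hypothetical extension
  "x-docs-hidden": false                 // hypothetical extension
});
console.log(decorated["x-sdk-method-name"]); // "createEnvelope"
```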


After that, the API programmer writers review the documentation included in the Swagger file and, using the same tool, make any needed edits or additions. Here is an example:

Figure 3


In parallel, the Developer Center team looks at the list of changes in the spec file, as well as the list of feature requests and bug fixes requested by the developer community, and decides on the target release version for each SDK. This also depends on the API name (eSignature, Rooms, …) and API version number (v2, v2.1, …). The following screenshot, for instance, shows a list of added API methods:

Figure 4
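
As an illustration of how such a change list can be computed (this is a sketch, not Tir’s actual implementation), diffing the `paths` of two spec versions is enough to find added methods:

```javascript
// Sketch: compute API methods present in the new spec but not the old one
// by comparing the verb+path pairs of the two Swagger documents.
function addedMethods(oldDoc, newDoc) {
  const key = (path, verb) => `${verb.toUpperCase()} ${path}`;
  const oldOps = new Set();
  for (const [path, verbs] of Object.entries(oldDoc.paths || {})) {
    for (const verb of Object.keys(verbs)) oldOps.add(key(path, verb));
  }
  const added = [];
  for (const [path, verbs] of Object.entries(newDoc.paths || {})) {
    for (const verb of Object.keys(verbs)) {
      if (!oldOps.has(key(path, verb))) added.push(key(path, verb));
    }
  }
  return added;
}

// Invented example specs for demonstration.
const v1 = { paths: { "/v2/accounts": { get: {} } } };
const v2 = {
  paths: {
    "/v2/accounts": { get: {}, post: {} },
    "/v2/templates": { get: {} }
  }
};
console.log(addedMethods(v1, v2)); // ["POST /v2/accounts", "GET /v2/templates"]
```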


The Tir interface makes it easy to select a programming language, an API, and an API version, and suggests a target SDK release version, as shown here:

Figure 5


It’s also possible to override the version number (for instance, 3.7.0-BETA instead of 3.7.0) and add more information to the release notes. Tir then submits the job to a Node.js queue, where a worker eventually picks it up and spawns a child process that runs Swagger Codegen to turn the JSON Swagger file into working source code. 🤖

Then the worker pushes the newly generated code to an internal git repo, creates a pull request against the main branch, and tags three members of the Developer Center team for code review, as illustrated in the following screenshot. We use GitHub’s load-balancing review assignment algorithm to make sure everyone on the team gets a fair share of code reviews. ⚖️

Figure 6
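
For a feel of what that automation involves, here is a sketch of the payloads a worker could send to GitHub’s REST API. The endpoints named in the comments are GitHub’s real ones; the branch, notes, and team slug values are invented:

```javascript
// Sketch of PR creation against the GitHub REST API.
// POST /repos/{owner}/{repo}/pulls
function pullRequestPayload(release) {
  return {
    title: `Release ${release.version}`,
    head: release.branch, // e.g. "release/3.7.0" (hypothetical branch name)
    base: "main",
    body: release.notes,
  };
}

// POST /repos/{owner}/{repo}/pulls/{number}/requested_reviewers
// Requesting review from a team lets GitHub's load-balancing algorithm
// pick individual members fairly.
function reviewerRequestPayload(teamSlug) {
  return { team_reviewers: [teamSlug] };
}
```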

If needed, a member of the team updates the changelog with additional release notes and applies any extra bug fixes that couldn’t be made in the Swagger Codegen templates.

As a side effect of creating this pull request, a whole set of unit tests and end-to-end tests is run to ensure that we don’t break customers who are using the SDKs. If there is a breaking change, we document it in the release notes, update the test cases and changelog, and bump the SDK’s major version. As you can see in this image, pull requests that pass the required checks are automatically merged. ✅

Figure 7
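
The version bump follows plain semantic versioning. A minimal sketch of the rule just described:

```javascript
// Plain semver bump: a breaking change increments the major version and
// resets minor and patch; a new feature bumps minor; anything else, patch.
function bumpVersion(current, { breaking = false, feature = false } = {}) {
  const [major, minor, patch] = current.split(".").map(Number);
  if (breaking) return `${major + 1}.0.0`;
  if (feature) return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}

console.log(bumpVersion("3.7.0", { breaking: true })); // "4.0.0"
```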

Next, the SDK is pushed out to package managers. For instance, this image shows the Java SDK being pushed to the Bintray Gradle repository:

Figure 8

In Tir, using the [push] button, an engineer pushes the code to GitHub.

Figure 9

The release branch is pushed, a pull request is created, and CI tests get triggered. ☕

Figure 10

A member of the Developer Center team reviews the PR and approves it if it looks good. ✅

Figure 11

Finally, in Tir, an engineer creates a GitHub release tag.

Figure 12

As a result, Tir copies the release notes and creates a release tag in the SDK’s GitHub repo! 🚀 🎉🙌

Figure 13
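
This final step maps naturally onto GitHub’s “create a release” endpoint (`POST /repos/{owner}/{repo}/releases`). As a sketch, with invented values, the payload might look like this:

```javascript
// Sketch of a "create a release" payload. The field names match GitHub's
// REST API; the version and notes here are illustrative only.
function releasePayload(version, notes) {
  return {
    tag_name: `v${version}`,
    name: `v${version}`,
    body: notes,
    // Pre-release versions like "3.7.0-BETA" carry a hyphenated suffix.
    prerelease: version.includes("-"),
  };
}
```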

This completes the journey of an SDK.

Next time I'll explain in detail our versioning strategy, branching strategy, and release notes conventions. Subsequently, in the closing post of this series, I will share some future enhancements, touch on edge cases of this release process, and explain why we deliberately decided to keep some tasks manual. 👋

Majid Mallis
Lead Software Engineer