

At the upcoming APIStrat conference in Portland, Taylor Barnett will explore various documentation design principles and discuss best practices.

Taylor Barnett, a Community Engineer at Keen IO, says practice and constant iteration are key to writing good documentation. At the upcoming API Strategy & Practice Conference 2017, Oct. 31 to Nov. 2 in Portland, OR, Barnett will explain the different types of docs and describe some best practices.

In her talk, “Things I Wish People Told Me About Writing Docs,” Barnett will look at how people consume documentation and discuss tools and tactics to enable other team members to write documentation. Barnett explains more in this edited interview.

The Linux Foundation: What led you to this talk? Have you encountered projects with bad documentation?

Taylor Barnett: For the last year, my teammate, Maggie Jan, and I have been leading work to improve the developer content and documentation experience at Keen IO. It’s no secret that developers love excellent documentation, but many API companies aren’t always equipped with the resources to make that happen. As a result, we all come across a lot of bad documentation when trying to use developer tools and APIs.

The Linux Foundation: Often, there is a team of documentation writers and there are developers who wrote that piece of software; both are experts in their own fields, but they need a lot of collaboration to create usable docs. How can that collaboration be improved?

Barnett: In large companies, this can definitely be true, although in many companies documentation is still owned by various teams. The need for more collaboration still applies, though. One way to improve collaboration is to bring docs into the product development process early on. If you wait until everything is done and about to be released, the people writing documentation are going to feel left out of the process, like an afterthought. If the people working on product development collaborate early on, not only does the product become better, but so does the documentation. People writing documentation usually spend some time figuring out the API or tool they are writing about, so they can only get better when they work with the people doing product development early on. Also, they can give great feedback from a user’s perspective much earlier in the process.

Another way to improve collaboration is to bring more people into the documentation review process. We do most of our documentation reviews in GitHub. It’s great to have not only someone in the role of an editor review it, but also people from the Engineering or Product teams. It increases the number of eyes on the docs and helps make them better.

The Linux Foundation: How should developers approach documentation?

Barnett: Most developers are pretty familiar with the idea of Test Driven Development (TDD), but how familiar are they with Documentation Driven Development (DDD)? The flow for DDD is:

  1. Write or update documentation,
  2. Get feedback on that documentation,
  3. Write a failing test according to that documentation (TDD),
  4. Write code to pass the failing test,
  5. Repeat.

It can be an excellent way for developers to save a lot of time and prevent spending too much time on poorly designed features. As Isaac Schlueter, co-founder of npm, says about Documentation Driven Development, writing clear prose is an “effective way to increase productivity by reducing both the frequency and cost of mistakes.” Our brains can only hold so much information at once. In computer terms, our working memory size is pretty small. Writing down some of the information we are thinking about is a way to “off-load significant chunks of thought with hardly any data-loss,” while allowing us to think slower and more carefully.
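For illustration, here is a minimal sketch of that loop in Python; the slugify function and its usage example are hypothetical, not something from Barnett’s talk. The documented example is written and reviewed first, it doubles as a failing doctest, and the implementation is only written afterward to make it pass.

    # Hypothetical sketch of Documentation Driven Development in Python.
    # Steps 1-2: write and review the documentation first (the docstring
    # and its usage example below). Step 3: the documented example doubles
    # as a test (doctest) that fails while the function body is missing.
    # Step 4: write just enough code to make the documented example pass.

    def slugify(title):
        """Convert a post title into a URL-friendly slug.

        >>> slugify("Things I Wish People Told Me About Writing Docs")
        'things-i-wish-people-told-me-about-writing-docs'
        """
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        import doctest
        doctest.testmod(verbose=True)  # runs the documented example as a test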

For example, at Keen IO, we recently split our JavaScript library into three different modules. This decision was inspired by the documentation we were maintaining. We had tried to streamline the docs, but there was just too much to cover in an attention-constrained world; many important details and features were hidden in the noise. If all of the documentation had been written sooner, we might have made this decision sooner.

Also, as a developer who writes docs myself, constant iteration and practice are important. Your first version of the docs isn’t going to be great, but by focusing on writing clear prose, it will get better with time. In addition, having another person who is not familiar with the product step through the documentation and review it is essential.

The Linux Foundation: If developers are writing documentation for other developers, how can they really think as the users?

Barnett: I used to think that developers are the best people to write docs for other developers because they are one of them. While I still believe this is partially true, some developers also assume a lot of knowledge. If it has been a while since a developer has done something, the “curse of knowledge” can set in: the more you know, the more you forget what it was like before. That’s why I like to talk about empathic documentation.

You need to empathize with the user on the other end. Don’t assume they know how to do something; give them resources to fill in the steps that might seem “easy” to you. Also, hearing that something is “easy” or “simple” when it is not working on the user’s end is the worst feeling. It makes your users doubt themselves, feel frustrated, and experience a bunch of other negative emotions. Always try to remember you need to be empathetic!

The Linux Foundation: What’s the importance of tools in creating documentation?

Barnett: Very important! Earlier I mentioned using GitHub for reviews. If you aren’t using a service like ReadMe or Apiary, I would also recommend having some continuous integration testing in place for your documentation site to make sure you don’t break it. A related question is whether you build your own thing or use a service. Tools can be helpful, but they might not always be the best fit; you have to find a balance based on your current resources. Lastly, I would recommend checking out Anne Gentle’s book, Docs Like Code. She brings up tools a lot in the book.
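As one possible shape for such a check (the URLs below are placeholders, not any particular company’s real docs), a CI job could run a small script that fails the build when a key documentation page stops responding:

    # Hypothetical CI smoke test for a documentation site: exit non-zero
    # (failing the build) if any key docs page stops returning HTTP 200.
    import sys

    import requests

    DOC_PAGES = [
        "https://example.com/docs/",               # placeholder URLs; replace
        "https://example.com/docs/api-reference",  # with your own docs pages
    ]

    def main():
        failures = []
        for url in DOC_PAGES:
            try:
                response = requests.get(url, timeout=10)
                if response.status_code != 200:
                    failures.append(f"{url} returned {response.status_code}")
            except requests.RequestException as exc:
                failures.append(f"{url} raised {exc}")
        if failures:
            print("\n".join(failures))
            sys.exit(1)  # a non-zero exit code fails the CI job
        print("All documentation pages responded with 200.")

    if __name__ == "__main__":
        main()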

The Linux Foundation: Who should attend your session?

Barnett: Everyone! Just kidding (kind of). You should attend if you are in any developer-facing role, such as developer relations, evangelists, advocates, marketers, etc.; if you are on a Product team for a developer-focused product or platform; or if you are a developer or engineer who wants to write better docs.

The Linux Foundation: What is the main takeaway from your talk?

Barnett: Anyone can write docs, but with some practice, iteration, and work on different documentation writing skills, anyone can write better docs.

Learn more in Taylor Barnett’s talk at the APIStrat conference coming up Oct. 31 – Nov. 2 in Portland, Oregon.


Learn tricks, shortcuts, and key lessons learned in creating a Developer Experience team, at APIStrat.

Many companies that provide an API also offer SDKs. At SendGrid, those SDKs send several billion emails monthly through SendGrid’s Web API. Recently, SendGrid rebuilt their seven open source SDKs (Python, PHP, C#, Ruby, Node.js, Java, and Go) to support 233 API endpoints, a process that I’ll describe in my upcoming talk at APIStrat in Portland.

Fortunately, when we started this undertaking, Matt Bernier had just launched our Developer Experience team, covering our open source documentation and libraries. I joined the team as the first Developer Experience Engineer, with a charter to manage the open source libraries in order to ensure a fast and painless integration with every API SendGrid produces.

Our first task on the Developer Engineering side was to update all of the core SendGrid SDKs, across all seven programming languages, to support the newly released third version of the SendGrid Web API and its hundreds of endpoints. At the time, our SDKs only supported the email sending endpoint for version 2 of the API, so this was a major task for one person. Based on our velocity, we calculated that it would take about 8 years to hand-code every single endpoint into each library.

This effort involved automated integration test creation and execution with a Swagger/OAI-powered mock API server, along with documentation, code, examples, CLAs, backlogs, and sending out swag. Along the way, we also gained some insights on what should not be automated, like HTTP clients.
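To give a rough idea of what spec-driven test automation can look like (this is an illustrative sketch only, not our actual tooling; the spec filename and mock server address are placeholders), a script can walk an OpenAPI document and exercise each plain GET path against a mock server generated from the same spec:

    # Illustrative sketch of spec-driven integration testing: read an
    # OpenAPI/Swagger document and hit each plain GET path on a mock server.
    # "openapi.yaml" and the mock server URL are placeholders.
    import requests
    import yaml  # pip install pyyaml

    SPEC_FILE = "openapi.yaml"
    MOCK_BASE_URL = "http://localhost:4010"

    with open(SPEC_FILE) as handle:
        spec = yaml.safe_load(handle)

    for path, operations in spec.get("paths", {}).items():
        if "get" not in operations:
            continue  # this sketch only exercises GET endpoints
        if "{" in path:
            continue  # skip templated paths to keep the example simple
        response = requests.get(MOCK_BASE_URL + path)
        assert response.status_code == 200, f"GET {path} -> {response.status_code}"
        print(f"GET {path} OK")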

In my talk at APIStrat, I am going to share some tricks, automations, shortcuts, and key lessons that I learned on our journey to creating a Developer Experience team:

  • We will walk through what we automated and why, including how we leveraged OpenAPI and StopLight.io to automate SDK documentation, code, examples, and tests.
  • Then we’ll dive into how we used CLA-Assistant.io to automate CLA signing and management along with Kotis’ API to automate sending and managing swag for our contributors.
  • We’ll explore how these changes were received by our community, how we adapted to their feedback, and how we prioritized with the RICE framework.

If you’re interested in attending, please take a moment to register and sign up for my talk. I hope to see you there!


Learn the basics of using REST APIs at the upcoming APIStrat conference.

APIs are becoming very popular and are a must-know for every type of developer. But what is an API? API stands for Application Programming Interface. It is a way to get one software application to talk to another software application. In this article, I’ll go over the basics of what APIs are and why to use them.

Nom Nom Nom! I happened to be snacking on chips while trying to think of a name for my REST API talk coming up at APIStrat in Portland. Similarly, the act of consuming or using a REST API means to eat it all up. In context, it means to eat it, swallow it, and digest it — leaving any others in the pile exposed. Sounds yummy, right?

It seems that every application out there is hungry for an API. Let’s look at Yelp, for example. Yelp by itself wouldn’t have all the functionality you’d expect. In order to search for nearby restaurants or locations, it needs to use an API for a map; it uses the Google API. With that, you can locate the nearest places and get directions to them. APIs allow you to integrate one tool into another tool to give it more functionality. Without the ability to make these types of integrations, you can say goodbye to a majority of the apps you use!

So why are APIs so important? Most companies today have several different software applications they need to use, including sales, accounting, CRM, a project management system, etc. Having all of that software work together is increasingly important, both for financial reasons and to make work processes flow more smoothly. Companies can also create their own tools using other APIs to enhance their own software, making their customers happier and giving them the tools they need.

API Basics

Back in 2000, the very first API came from eBay. Since then, APIs have increased exponentially. In 2016, more than 50 million API requests were made, and there were 30,000 available APIs out there. From 2015 to 2016, the number of APIs doubled, from 15,000 to 30,000!

In my talk, I will cover API basics, how to make API requests, how APIs are made, and much more. I will show you how to use Postman to test making REST API calls, so that you will leave with the skills to make REST calls against any API. This talk is designed for any audience level. If you are brand new to programming, that’s fine. If you are an experienced programmer who currently uses APIs but wants to go back to the basics to understand how APIs work, that is fine, too!
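For a taste of what such a call looks like in code rather than in Postman, here is a minimal Python example against GitHub’s public REST API; any public endpoint would do.

    # A minimal REST call in Python: send an HTTP GET and read the JSON
    # response. This is roughly what Postman does for you behind its UI.
    import requests

    response = requests.get(
        "https://api.github.com/repos/python/cpython",
        headers={"Accept": "application/vnd.github+json"},
    )

    print(response.status_code)  # 200 on success
    data = response.json()       # the response body, parsed as JSON
    print(data["full_name"], "has", data["stargazers_count"], "stars")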

If you want to learn more, be sure to check out my other talk at APIStrat: “Chatbots are the Future: Let’s Build One!” In this talk, I will go over how to build a working chatbot using the Cisco Spark API, which is a collaboration API for chat (messages), calling, and video. You don’t need to install or download anything to prepare. I will cover everything in the presentation, and it is designed for everyone to follow along. I guarantee you will have a working chatbot by the end of the presentation.

You can learn more at the upcoming APIStrat conference.

Improve the efficiency of your software development team with the RICE framework. Learn more at the upcoming APIStrat conference in Portland, Oregon.

The Developer Experience team at SendGrid is a small but mighty force of two. We attempt to tackle every problem that we can get our hands on, which often means that some items get left behind. At the outset, we surveyed everything that was going on in our open source libraries and quickly realized that we needed to find a way to prioritize what we were going to work on. Luckily, our team lives, organizationally, on the Product Management team, and we had just received a gentle nudge and training on the RICE prioritization framework.

On our company blog, I wrote an article about how employing this framework, using a spreadsheet, helped us double our velocity as a team within the first sprint. Our development velocity doubled because the most impactful things for the time spent are not always the biggest things, but the biggest things tend to attract the most attention due to their size.

What is RICE?

RICE is an acronym that stands for Reach x Impact x Confidence, all divided by Effort. This calculation gives you a score that weighs the factors described below. Some of the definitions we use are a slight departure from Intercom’s original version, but this has been very effective for us!

The calculation:

Score = (Reach * Impact * Confidence) / Effort

This gives us a score for every item in our list. Then, we sort our list in descending order by score. Once we had a sorted list, we realized that we had accidentally made a Kanban backlog. We work from the top of the list, keeping work in progress (WIP) to as much of a minimum as possible. WIP can be tough with open source, because we often have 20-30 issues waiting for a community member’s response. These items sit at the top of our backlog, and we look into them at the start of every day in the hope that we can clear them out of our WIP category.
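As an illustration of the mechanics (the confidence mapping and the sample items below are invented for the example, not our actual numbers), scoring and sorting a backlog with RICE might look like this in Python:

    # Illustrative RICE scoring: Reach * Impact * Confidence / Effort,
    # sorted in descending order to form the backlog. The confidence
    # mapping and sample items are hypothetical.
    CONFIDENCE = {
        "None": 0.0,
        "Minimal": 0.2,
        "Low": 0.4,
        "Medium": 0.6,
        "High": 0.8,
        "with my eyes closed": 1.0,
    }

    def rice_score(item):
        return (item["reach"] * item["impact"]
                * CONFIDENCE[item["confidence"]]) / item["effort"]

    backlog = [
        {"name": "Fix auth bug in one SDK", "reach": 5000, "impact": 2,
         "confidence": "High", "effort": 3},
        {"name": "Rewrite a README", "reach": 800, "impact": 1,
         "confidence": "with my eyes closed", "effort": 1},
        {"name": "Support a new endpoint", "reach": 300, "impact": 3,
         "confidence": "Medium", "effort": 8},
    ]

    for item in sorted(backlog, key=rice_score, reverse=True):
        print(f"{rice_score(item):8.1f}  {item['name']}")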

Lessons Learned

Reach – The number of customers this will affect

One thing we learned about using RICE is to make sure that we use consistent numbers for each of the variables in the calculation. It was very tempting for us, an email company, to use the number of emails sent as the Reach parameter. This worked until we started trying to evaluate tasks that didn’t have anything to do with our v3/mail/send endpoint. We eventually settled on “the number of customers using this library for this purpose,” calculating API user count and mail user count for Reach.

Impact – A measure of the effect completing this project will have

It is easy to assume that every single item is a high or massive priority. It looks nice, gives you an ego boost, and totally messes up everything in your ranking system. Be honest with yourself about what is on your list. If things don’t really seem to be in the right place in your list (more on this below), then look at Impact, because it’s probably artificially high, especially in the context of the items around it in the list.

Confidence – How confident we are that we can sit down and complete this task today

We use a text-based selection from the list: None, Minimal, Low, Medium, High, and “with my eyes closed”. These each translate into specific numbers for the calculation.

Effort – The number of story points this will take to complete

We use story points because this approach allows us to figure out the calendar length of a task rather than the aggregate of specific amounts of time spent on the task. To be more specific, this is the difference between “It’s going to take 3 hours” and “It will only take 3 hours of work, but it won’t be finished until Friday.” This is an easy trap to fall into, because new projects are exciting and we want to jump in and knock them out. That doesn’t mean that we can actually get a project that “will only take a couple of days” done within the same month we started it. Life happens, and your velocity calculation accounts for that, especially if you use an average over the last year (get out yer agile pitchforks!).

Letting the backlog win

We have learned the hard way that the projects we really want to work on right now are not always the right projects for the moment. It is important to have confidence in the calculation and assume that it is correct. That is, until you look at the list and realize, “Hey, item 15 really should be item 5. What’s going on here?” Look at your list in the context of the other items: does the order feel correct? If not, why not?

We ended up using RICE as the baseline calculation for everything we do, but it is not the end-all. We added calculation modifiers for company priority, due dates, and status, because something the executives have on the company roadmap that has to be done in Q3 should be worked on right away. And, because we are using Kanban, the status of an item is important: once you start something, it should stay at the top of the list until it is done or you decide it is no longer necessary to complete. Getting WIP completed, rather than backed up, is a good way to see the impact of your work and get a sense of accomplishment for yourself and your development team.
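For illustration only, with hypothetical factor values, such modifiers could be layered on top of the base RICE score like this:

    # Hypothetical modifiers layered on top of a base RICE score; the
    # factor values here are invented for illustration.
    def adjusted_score(base_score, on_roadmap=False,
                       due_this_quarter=False, in_progress=False):
        score = base_score
        if on_roadmap:
            score *= 2.0   # company-roadmap items float toward the top
        if due_this_quarter:
            score *= 1.5   # hard due dates pull work forward
        if in_progress:
            score *= 3.0   # started work stays on top until it is finished
        return score

    print(adjusted_score(800, on_roadmap=True))    # 1600.0
    print(adjusted_score(67.5, in_progress=True))  # 202.5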

Matt Bernier is the Developer Experience Product Manager at SendGrid, where he spends most of his time digging into customer feedback in order to provide a world-class experience for developers with SendGrid.

Learn more in Matt Bernier’s talk — How We Doubled the Velocity of Our Developer Experience Team — at the APIStrat conference coming up Oct. 31 to Nov. 2 in Portland, Oregon.