From Monolith to Platform: a story of tech, strategy, and change management

Testing and defining architecture standards early pays dividends down the line.

Josh Dormont
8 min read · Jan 14, 2020
Photo by Dan Gold on Unsplash

When we developed the first production-ready app on the servers we had just rented from our IT department, it was built on a hodgepodge tech stack: MS SQL Server, Express, Angular 1.1, NodeJS. It was essentially a MEAN stack but using old parts from a scrap bin. To be fair to our team, this was intentional — we needed to get the thing up and running and into users’ hands by the end of the year to create our first feedback cycle and demonstrate our ability to deploy user-friendly working software quickly.
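For context, here is a minimal sketch of roughly what that first stack looked like in practice, assuming the Express framework and the mssql Node driver; the route, table, and field names are invented for illustration rather than taken from the real app:

    // Express route reading straight from SQL Server via the mssql driver.
    const express = require('express');
    const sql = require('mssql');

    const app = express();

    app.get('/api/programs', async (req, res) => {
      try {
        // Connection string comes from the environment; details are hypothetical.
        await sql.connect(process.env.SQLSERVER_CONNECTION_STRING);
        const result = await sql.query`SELECT program_id, program_name FROM programs`;
        res.json(result.recordset);
      } catch (err) {
        res.status(500).json({ error: 'query failed' });
      }
    });

    app.listen(process.env.PORT || 3000);

Nothing fancy: one server, one database, one route per screen, which was exactly the point at that stage.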

The app sat on the server alongside others that were built primarily on the LAMP stack, as well as some vanilla JS and PHP apps. There wasn’t anything wrong with this approach given where we were with our digital strategy (the one we didn’t really have), but as the first phase of testing ended we had new considerations to grapple with: a) scalable architecture, b) user coherence, c) data standards, and d) rapid deployment.

And so began our quest to define a platform strategy and architecture solution that could grow with us.

At the start of our journey, there were essentially three questions we needed to answer:

  1. What do we begin working on? What problem are we working to solve?
  2. What standards do we need and how do we implement them?
  3. How do we organize ourselves to do the work?

There are frameworks aplenty for answering these questions and beginning the work. The beauty of a framework is that it can flex to different contexts and team cultures. The challenge is that, surrounded by competing views of which one to use and who should be involved, the choice becomes paralyzing and meetings get scheduled to plan meetings to plan meetings.

So, rather than pledging allegiance to SAFe or Spotify or a Tribe Called Quest, we chose to grab a little bit of this and that to begin our journey. This is the story of the twists, turns, stumbles, roses, and thorns that we found along the way.

What do we begin working on?

I use the word “begin” with intention. We knew at the early stages of building our products that most of our ideas about how to best meet user needs would turn out to be wrong. The sooner we could test those ideas, pivot to stronger ones, and grow those that worked, the better off everyone would be. On paper, and to each other, this is how we told ourselves we would pursue a new generation of applications. And if we were a startup, or even a group within a larger company with fewer constraints, maybe those ideals could have been realized.

We shopped around a few ideas that had come from conversations we had with teachers, principals, and internal stakeholders. But almost all of them landed wrong for different reasons — they were too ambitious or not ambitious enough. They were too far outside our locus of control or required partnership with different divisions. The data were new or not new enough. Looking back, all of these “no’s” helped us learn an important lesson: falling in love with a problem is hard when the problem is as deep as addressing educational equity.

Not too hot. Not too cold. A just-right problem.

Where we landed had a mix of a few critical elements that I think are especially important for teams working in large, complex organizations.

  • Choose something that adds new value (don’t start by replacing legacy systems)
  • Choose something that is designed for a single, well-defined audience (too many edge cases make building an MVP nearly impossible)
  • Define the ‘what’ as an outcome or a problem to solve rather than as a notion of a product. Even introducing the idea of a name can be dangerous at this stage.
  • Find a problem whose solution would significantly help both the “business” and user. This is a key element of Service Design — a concept I’ll come back to in a later post.
  • Find a problem that is worth addressing but won’t force stakeholders into a corner where they defend their territory.

In the end, we chose a problem that was timely, critical for student success, targeted at a single user group (student programming staff), and had buy-in from the team closest to the relevant policy and data. We asked, “How might we ensure every student has access to equitable course offerings?” It was hard work landing on a problem at the intersection of the elements listed above, but without it we would have been nowhere.

What standards do we need and how do we implement them?

Going into the project we had few standards to speak of and knew we needed to change that. There were four buckets we needed to work on:

  1. Research and Design,
  2. Data models and APIs,
  3. Front end architecture, and
  4. Product Management.

It’s extraordinarily difficult to do any of this work in the abstract, so we used the ‘what’ from above to begin building a product that would put all four pieces in motion. This was a good thing. The not-good thing we did was failing to define specific goals for each that we could work toward incrementally. Put another way, you need to define goals not just for the thing you’re creating but also for the way you’re creating it. Prioritization becomes impossible when you’re simultaneously scoping out features and trying to plan a design system.

A solution? Set process goals and product goals with equal importance.
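To make that less abstract: for the data models and APIs bucket, an incremental process goal could be as modest as agreeing on a single response shape for every endpoint. A minimal sketch follows, with the helper name and fields invented rather than drawn from our actual standard:

    // Hypothetical response envelope so every front end can rely on one shape.
    function sendEnvelope(res, data, meta = {}) {
      res.json({
        data,                                // the resource payload
        meta: { apiVersion: 'v1', ...meta }, // versioning, paging, counts, etc.
        errors: [],                          // always present, even when empty
      });
    }

    module.exports = { sendEnvelope };

    // Usage inside a route handler:
    //   sendEnvelope(res, result.recordset, { count: result.recordset.length });

A goal like “every new endpoint uses the shared envelope by the end of the quarter” is small enough to prioritize alongside feature work, which is the point.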

Avoiding the swamp of murky goals

One of the key tenets of continuous improvement is the improvement kata: the flow of 1) setting a goal, 2) assessing the current condition, and 3) running experiments toward a specific next target condition. There is great guidance on how to do this in Lean Enterprise, but for me the key takeaways were the following (a sketch of one recorded cycle follows the list):

  • Define measurable (SMART) goals for each priority or standard,
  • Assess the current state of each priority area with a wide range of stakeholders so there’s a shared understanding of the strengths and challenges of the status quo, and
  • Identify a specific experiment to test that can be implemented within 2–3 weeks.
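Here is one way a single kata cycle might be recorded next to the backlog; the fields and values below are invented for illustration, not our actual records:

    // Hypothetical record of one improvement-kata cycle.
    const kataCycle = {
      goal: 'Cut time from merged code to deployed build to under one day', // the SMART target
      currentCondition: 'Deploys are manual and run roughly once a week',   // assessed with stakeholders
      experiment: 'Script the build step and run it on every merge',        // testable within 2-3 weeks
      reviewDate: '2020-02-03',                                             // when we check the result
    };

    module.exports = kataCycle;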

What this doesn’t look like is setting a goal like ‘Build a Design System’ (we may have done that. Twice.) and then not identifying small first pieces to work toward. I fully recognize it’s easy to say this in the abstract; sometimes step two (assessing the current state) can take a long time in a bureaucracy, which is likely a messy, convoluted system. But the key is not to use that as a crutch for a less disciplined approach. Rather, complexity makes it even more important to be diligent about defining clear, small experiments that move the work forward.

This has two benefits: momentum building and risk reduction.

It’s likely that you’re doing this work with some, but tenuous, cover from leadership. Demonstrating tangible results through fast learning and new processes is one of the best ways to convince skeptics of the value of Agile/Lean methods. It also lessens the risk that you’ll go too far before running into strong resistance. So, work small with clear goals, create things with explicit learning value, and work out loud. Bonus points if that ‘out loud’ extends beyond your organization’s walls.

How do we organize ourselves?

Let’s be Agile! We already theoretically were. But like most teams ‘implementing Agile’ we were doing a lot of the Agile habits (Daily Scrum, Retrospectives, Jira, Backlog prioritization) without realizing either the main Agile gains or the underlying goals (faster releases, continuous improvement, continuous delivery, streamlined documentation and governance).

What started as a motley group of a few developers, a designer, and a project manager ballooned into a 15-person ‘team.’ There were many signals we could have picked up on that we were headed for trouble. Anti-patterns were all around us (too much work in progress, missed estimates, infrequent commits, etc.). As we move forward, there are a few key lessons we’ve learned:

  1. Keep the team small. This is pretty basic, as the research is quite clear on this one, but it’s still easy to get wrong in an enterprise setting. Between trying to appease multiple stakeholders and finding ‘capacity’ from people who likely aren’t committed to the project full time, the inclination is to include more. Don’t. Just don’t. Having fewer people also makes it more likely you can identify needs and define scope for a release (whether an early-stage MVP or an enhancement) that can be designed, built, and tested in a relatively short period of time.
  2. Define roles more clearly than you would ever think necessary. The difference between working with a loose sense of everyone’s skills and having very clear roles (tech lead vs. back-end developer, for example) was stark. This is also the place to, where possible, estimate when in a project someone’s time might spike (e.g. is a developer going to be using all of their committed time at the beginning, when you’re still in the research phase?). This will help relieve some of the inevitable stress that the team member’s manager is feeling.
  3. Clarify who is making what decisions. Make a distinction between the product and release goals and the decisions the product team is empowered to make. Keep the team involved with open and transparent decisions (e.g. avoid blind handoffs) but avoid committee fatigue by setting clear roles up front. There is a clear risk here that someone working on more than one project now has double the meetings (2x stand-ups, planning, and retrospectives). There are different mitigation strategies for this, but however you resolve it, don’t just give in. Consider ways to consolidate or isolate (e.g. restrict a person from working on more than one sprint team at a time).
  4. Set collaboration goals and work each sprint to make progress on them. Want to reduce lag time from research to deployment, expedite user feedback processes, or streamline QA? Good. Make these goals explicit and each sprint commit to doing one thing better to reach them. At the next retrospective bring back the goal and the action, and share what worked and what didn’t. Then make a new goal. This is essentially the Plan/Do/Study/Act (PDSA) cycle made popular by Deming and ‘Improvement Science.’

Closing up

Think big, start small. You may have a vision for a robust design library matched to component assets, users lined up to test prototypes, APIs, and a suite of unit tests as part of an automated deployment cycle. Fantastic: you’ve done your reading and have a way to communicate with your team and peers about where you’re headed. But doing all of this is especially hard within a government or enterprise setting, so it makes sense to start with the things that your team and leadership are most excited about and can dedicate capacity to. We ended up doing more with our API repo than we expected. That was a win, but a harder one to sell to leadership who weren’t technologists.

(Disclaimer: This piece, like others in the series, is written by me as an individual. It does not represent the views or perspectives of the NYCDOE.)
