OpenStack Stein PTG - TC Report
I spent all day Friday, except for one nova session, in the TC room. I’ll admit I wasn’t able to completely absorb all discussions and outcomes, but these are the discussions I was able to summarize.
Note, if the picture in this post looks familiar, it’s because you saw it while ordering a beer at Station 26, located right behind the conference hotel. The brewery operates out of an old fire station, hence the name. They show support for first responders, firefighters, law enforcement, military, and emergency medical services by displaying department patches throughout their establishment. Besides that, their beer is delicious.
The morning started off discussing project health. This initiative is relatively new and came out of the Vancouver summit. The purpose is to open lines of communication between project leads and members of the TC. It also helps the TC keep a pulse on overall health across OpenStack project teams. The discussion focused on feedback, determining how useful it was, and the ways it could be improved.
Several TC members reported varying levels of investment in the initiative, ranging from an hour to several hours. Topics raised by PTLs ranged from community goal status to contributor burnout. The TC decided to refine the phrasing used when reaching out to projects, hoping to clarify the purpose, reduce time spent collecting feedback, and make it easier for PTLs to formulate accurate responses. Action items included amending the PTL guide to include a statement about open communication with the TC and sending welcome emails to new PTLs with a similar message.
The usefulness of Help Wanted lists surfaced a few times during this discussion. Several people in the room voiced concerns that the lists were not driving contributions as effectively as we'd initially hoped. No direct action items came from this as far as I could tell, but this is a topic for another day.
We spent the remainder of the morning discussing ways we can include contributors in other regions, specifically the Asia-Pacific region. Not only do different time zones and language barriers present obstacles in communication, but finding common tooling is tough. Most APAC developers struggle with connecting to IRC, which can have legal ramifications depending on location and jurisdiction. The ask was to see if participants would be receptive to a non-IRC-based application to facilitate better communication, specifically WeChat, which is a standard method of communication in that part of the world. Several people in the room made it clear that officially titling a chat room as "OpenStack Technical Committee" would be a non-starter if there wasn't unanimous support for the idea. Another concern was that having a TC-official room might eventually be empty as TC members rotate, thus resulting in a negative experience for the audience we're trying to reach.
The OpenStack Foundation does have formal WeChat groups for OpenStack discussions, and a few people were open to joining as a way to bridge the gap. It helped to have a couple of APAC contributors participating in the discussion, too. They were able to share a perspective that only a few other people in the room have experienced first-hand.
Ultimately, I think everyone agreed that fragmenting communication would be a negative side-effect of doing something like this. Conversely, using WeChat as a way to direct APAC contributors to formal mailing list communication could be very useful in building our contributor base and improving project health.
Howard sent a note to the mailing list after the session, continuing the discussion with a specific focus on asking TC candidates for their opinions.
Evolving Service Architecture & Dependency Management
After lunch, I stepped out to attend a nova session about unified limits. When I returned to the TC room, they were in the middle of discussing service dependencies and architectures.
OpenStack has a rich environment full of projects and services, some of which aren't under OpenStack governance but provide excellent value for developers and operators. At the same time, there is significant duplication across OpenStack services, symptomatic of a hesitation to add dependencies, particularly dependencies that raise the bar for operators. A great example of this duplication is the amount of user-secret and security-specific code for storing sensitive data across services, even though Barbican was developed to solve exactly that problem. Another good example is the usage of etcd, which was formally accepted as a base service shortly after the Boston summit in 2017. How do we allow developers the flexibility to solve problems using base services without continually frustrating operators with changing architectural dependencies?
Luckily, there were some operators in the room who were happy to share their perspective. More often than not, operators' initial reaction when told they need to deploy yet another service is a flat no. Developers either continue to push the discussion or decide to fix the problem another way. The operators in the room made it clear that justification is the next logical step in that conversation. It's not that operators oppose architectural decisions made by developers, but the reasoning behind them needs to be explicit. Telling operators that a dependency provides secure user secret storage probably isn't going to result in as much yelling and screaming as you might expect. Ultimately, developers need to build services in ways that make sense with the tools available to them, and they need to justify why specific dependencies are required. This concise justification is imperative for operators, deployers, and packagers.
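One way developers can keep that flexibility without hard-wiring a dependency is to code against a small secret-storage interface and let the backend be chosen at deployment time. The sketch below is a hypothetical illustration, not code from any OpenStack service: the `SecretStore` interface and `InMemorySecretStore` names are my own, and a real deployment would implement the same interface by delegating to Barbican (for example via python-barbicanclient) rather than storing secrets in memory.

```python
from abc import ABC, abstractmethod


class SecretStore(ABC):
    """Hypothetical interface a service could code against instead of
    rolling its own user-secret storage logic."""

    @abstractmethod
    def store(self, name: str, payload: str) -> str:
        """Store a secret and return an opaque reference to it."""

    @abstractmethod
    def retrieve(self, ref: str) -> str:
        """Return the payload for a previously stored secret."""


class InMemorySecretStore(SecretStore):
    """Stand-in backend for illustration and testing only; a production
    backend would call out to Barbican instead of holding secrets here."""

    def __init__(self):
        self._secrets = {}
        self._counter = 0

    def store(self, name, payload):
        self._counter += 1
        ref = f"secret/{self._counter}"  # opaque reference, like a Barbican secret href
        self._secrets[ref] = (name, payload)
        return ref

    def retrieve(self, ref):
        return self._secrets[ref][1]


# A service stores a credential through the interface, keeping the
# choice of backend (in-memory, Barbican, ...) a deployment decision.
store = InMemorySecretStore()
ref = store.store("db-password", "s3cr3t")
print(store.retrieve(ref))  # prints "s3cr3t"
```

The point of the indirection is the one the operators raised: the service's justification ("we need secure user secret storage") stays explicit in the interface, while deployers decide which backend satisfies it.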
In my opinion, explanations like this are a natural fit for the constellation work in OpenStack, especially since deployers and operators would consume constellations to deploy OpenStack for a particular use-case. I didn't raise this during the meeting, and I'm unsure if others feel the same way. I might try and bring this up in a future office hours session.
Long-Term Community Goals
Community goals fall within the domain of the TC. Naturally, so do long-running community goals. Some points raised in this discussion weren't specific to long-running goals, but community goals in general.
As a community, we started deciding on community-wide initiatives during the Ocata development cycle. Community goals are useful, but they are contentious for multiple reasons. Since they usually affect many projects, resources are always a bottleneck. They are also subject to the priorities of each individual project. Long-running goals are difficult to track, especially when they involve a non-trivial amount of work across 30+ projects.
While those things affect the success rate of community-wide goals, we made some progress on making it easier to wrangle long-running initiatives. First and foremost, breaking complicated goals into more digestible sub-goals was deemed a requirement. Some previous, relatively trivial goals showed that even straightforward code changes can take an entire cycle to propagate across the OpenStack ecosystem. Breaking a goal into smaller pieces makes pushing change through our community easier, especially significant change. However, this introduces another problem: keeping the vision that spans multiple sub-goals clear. Often there are only a few people who understand the end game. We need to leverage the domain knowledge of those experts to document how all the pieces fit together. A document like this disseminates that knowledge, making it easier for people to chip in effectively and understand the approach. At the very least, it helps projects get ahead of changes and incorporate them into their roadmaps early.
There is a patch up for review to clarify what this means for goal definitions. I'd like to try this process with the granular RBAC work that we've been developing over the last year. We already have a project-specific document describing the overall direction in our specification repository. At the very least, going through this process might help other people understand how we can make OpenStack services more consumable to end-users and less painful for deployers to maintain.