Flowbuilder
Web
B2B
Healthcare
UX

Increase in satisfaction scores
Reduction in third party tool usage
Reduction in operating costs of internal users
ETL stands for “Extract, Transform, and Load.”

The process of ETL plays a key role in data integration strategies. ETL allows businesses to gather data from multiple sources and consolidate it into a single, centralized location. ETL also makes it possible for different types of data to work together.
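To ground the concept, here is a minimal sketch of the ETL pattern in Python. The file format and field names (`patient_id`, `score`) are hypothetical illustrations, not Flowbuilder's actual schema.

```python
import csv

def extract(path):
    # Extract: read raw records from a source (a CSV file here)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    # Transform: clean and consolidate fields from the source
    return [
        {"patient_id": r["id"].strip(), "score": float(r["score"])}
        for r in records
        if r.get("score")  # drop rows missing the metric
    ]

def load(records, path):
    # Load: write the cleaned records to the central destination
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["patient_id", "score"])
        writer.writeheader()
        writer.writerows(records)
```

Each node in a Flowbuilder chain plays a role somewhere along this extract-transform-load path.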
Flowbuilder works with a suite of enterprise products that provide field insights and data analytics for clients in healthcare. Flowbuilder is one of the suite's resident ETL tools; it produces metrics, prediction models, CEI scores, and other outputs that are used in the other products.
This case study details the redesign and enhancement of an ETL (Extract, Transform, Load) tool in the healthcare domain.
This project was initiated by a combination of business decisions and customer feedback.
I worked alongside the Sr. Director of Product Management and the development team: I gathered requirements, conducted user research and testing, and delivered wireframes and visual designs.
I also worked collaboratively with a design team that included a Sr. UX Designer and a Lead UX Designer. We gave each other feedback and worked to maintain consistency.
Make the product self serve and drive down operating costs.
Right now, Flowbuilder is used by internal employees who work off client requirements. The main objective is to have the client's own employees use the product independently.
Flowbuilder depends on many external products to complete its workflows. Before it can be shipped to a client, it must be enhanced into a complete, self-contained product with an excellent user experience.
The internal employees can then be redirected to other projects, improving cost and efficiency in operations.
This is how Flowbuilder looked at the time I joined the project. Here a user would link blocks called nodes in their workspace. These nodes perform transformative functions on the data that passes through them.
Persona
John Kubler
Business Technology Associate
John works on the Biotech Client Project in Flowbuilder. He attends team meetings to discuss Biotech’s requirements and then they work together to create flows.
Set up nodes and chains based on the client's requirements.
Test each pipeline beforehand to make sure it executes correctly without errors.
Perform troubleshooting on failed executions and apply appropriate fixes.
Setting up nodes can be a daunting task. There is not enough space to see the entire SQL configuration, which makes it hard to pinpoint mistakes.
The tool lacks many convenience features, such as undo, saving changes, keyboard shortcuts, and cut/copy. This makes simple tasks tedious and time-consuming.
Users working on the same pipeline often have difficulty tracking what other team members have done. This often leads to communication gaps and frustration.
The node space can get really cluttered, and nodes often get hidden behind other nodes. This frequently leads to confusion when working in a shared workspace.
I held bi-weekly discussions with the PM and dev teams to understand requirements and make sure they aligned with the business goals.
I would then compile a list of tasks to be performed in that module.
Wires were reviewed with the dev team to make sure all requirements were met.
Wires were then reviewed with the design team to make sure they followed the design system.
Changes were made based on the feedback on the wires.
Wires were annotated for development.
Since Flowbuilder is part of a suite of products, the design patterns must be consistent. Users must have a consistent experience across all products.
User tasks and actions that may have dire consequences must be flagged. Slowing a user down by adding friction and review steps can prevent or reduce mistakes made in the product. Users should experience a sense of seriousness when completing these tasks.
Look for opportunities to provide alternate, simpler interfaces for non-technical users, e.g., letting a user perform an action without knowledge of SQL. A non-technical user should experience a sense of ease when presented with alternatives to coding.
Since many of Flowbuilder's users were internal, I could approach them at any time for more information about their pain points and for testing. I also had access to research done by other researchers before I joined.
Here are some of the questions I needed answered:
How are the users using the tool right now?
How do the users work in a team?
What do they find useful in the existing tool and designs?
What level of technical know-how is needed before you are able to use the product?
I preferred one-on-one interviews with users. I would sit with them at their laptops and observe as they completed their tasks and talked through their pain points.
Top insights
Users have a good grasp of SQL and usually work on a single node for a long time before moving on to the next one.
Users often use third-party applications like IntelliJ IDEA to write SQL based on client requirements and then paste it into the node configuration, because the tool does not provide adequate space to view, review, and test the SQL.
Nodes with seemingly no purpose can appear in the workspace with no way to trace why they were placed there. This can get really confusing when working with other users.
The node space can get super cluttered, either as a result of dud nodes or of really complex requirements.
Before I started designing, I needed to understand how big Flowbuilder would become after adding the other features.
This would give me a sense of the hierarchy and what navigation pattern is best suited for this. I had multiple PM meetings to create a singular IA (Information Architecture) chart, to map out future requirements and position them with respect to what user role would be using it.
The information architecture is built around a “Spine.” The Spine is a path of drill-downs from “Organizations” to “Mappings” that allows the user to travel deeper into the tool. Each level of drill-down on the Spine is meant for a specific user role; based on their role, a user lands on that part of the Spine.
We needed a way for users to navigate between “page siblings,” and, with new features likely to be added in the future, the navigation had to be scalable. We created a navigation inspired by JIRA, since our users already use it to log and track bugs at work; that familiarity would help them learn the navigation faster. We incorporated a first-level side navigation and a second-level menu to display “page siblings.”

The screen above depicts the “Flow view,” the core of Flowbuilder. Here the user can drag and drop nodes to create a node chain. These chains are built by internal users who receive requirements from ZS's clients. The chain is later executed, and the transformed data is used in the other products.
The products in the suite are complex and intertwined; to understand one, you need to understand them all. As one of the first steps, I conducted workshops with the dev and design teams to document and map out these products.
Maintain consistency across products: since Flowbuilder exists in a product suite, users must have the same experience throughout it, which means using the same design patterns.
While I was working on Flowbuilder in Axure, a decision was made to revamp the design system and styling and move to Figma. Flowbuilder was one of the first teams to make the move, which meant I needed to lay the groundwork for a design system that other products could follow to keep consistency within our suite.
As part of the pattern library initiative, the verso team defined breadcrumbs, truncation, typography, page layouts, responsive behavior, and essentially anything not yet defined in the design system.

This page acts as a dashboard for all users of Flowbuilder. It provides high-level information on Executions, Remote Agents, Top Alerts, and Pipelines.
It is especially useful for admin users who need to keep track of Remote Agents and issues with flow executions. Remote Agents left running too long are a huge expense for a company; this view, combined with “Top alerts,” helps admin users resolve such issues immediately.

This is the review state for the flow screen. One of the main changes from the Axure wireframes is that we removed the ability to rapidly drag and drop nodes in the workspace; instead we maintained a view-only state where users can only review the configurations of each node.
By adding a view-only state to the workspace, we reduce the likelihood of a user accidentally making changes to a sensitive configuration. This is especially important from a self-serve perspective, since we would be held accountable for mistakes made on the client side. It also fits the user's mental model better: a user generally works on a single node for a long time before moving on to another node in the chain.

The Workflow requires the user to follow a specific set of steps before adding the node in the system. The user needs to:
Select Node type
Enter a name and a description
Connect a node to other nodes
Configure the node
Review the information before moving on.
Following are the points of value:
By creating a methodical process, we focus the user's attention on the task at hand. The wizard requires all steps to be completed before the node can be added, which ensures the user is clear on the requirements before starting the node creation process.
It also prevents node clutter. In previous iterations of Flowbuilder, users would often drag nodes into the workspace and then forget about them, leaving dud nodes cluttering the workspace. A wizard workflow discourages this, since it won't let you add a node without filling in its required fields.
The “Node details” step requires the user to add a name and description to the node. Previously, added nodes received a default name that simply incremented (Node_1, Node_2, etc.). Proper names and descriptions ensure that the entire team, working together on the same flow, understands the work that has been done.

SQL nodes allow the user to add transformations using SQL. Here the user can reference variables created elsewhere within the SQL. As the user types, the editor auto-suggests code and variables, or the user can drag and drop variables into the SQL box from the library on the left.
Following are the points of value:
The previous version of Flowbuilder had a small SQL box embedded on the right side of the node properties. The box was too small for the user to work in efficiently, so users would write the SQL in another application and paste it in. By providing a dedicated space for writing SQL, we move closer to making Flowbuilder a self-contained application.
External applications could not work with variables created within Flowbuilder. By providing a library, users can reuse common variables created by the team.
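Conceptually, the variable library works as substitution into the SQL text. The sketch below models that idea in Python; the `${...}` placeholder syntax, variable names, and query are hypothetical, not Flowbuilder's actual implementation.

```python
import re

def resolve_variables(sql, variables):
    """Replace ${name} placeholders in a SQL string with values from
    the shared library; unknown names raise an error instead of
    silently producing broken SQL."""
    def lookup(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"Unknown variable: {name}")
        return str(variables[name])
    return re.sub(r"\$\{(\w+)\}", lookup, sql)

# Hypothetical variables shared by a team in the library panel
library = {"reporting_year": 2023, "min_score": 70}
query = ("SELECT * FROM scores WHERE year = ${reporting_year} "
         "AND score >= ${min_score}")
```

Because unknown variables fail loudly, a team member reusing a colleague's SQL finds out immediately when a library variable is missing.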

The node configuration step changes based on the node type selected in step 1. This example shows a “Filter node,” a transformation node just like the SQL node; it filters out data headers from the raw data fed into it.
This is an example of creating an interface in place of a plain SQL box. It allows non-technical users to complete tasks without knowledge of SQL, which is useful for the self-serve direction, since it may be unclear what types of users exist at the client location.
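Under the hood, a filter node amounts to selecting headers and rows without the user writing SQL. The sketch below illustrates the equivalent operation in Python; the field names are hypothetical.

```python
def filter_node(rows, keep_headers, predicate=lambda row: True):
    # Keep only the selected data headers and the rows matching the
    # predicate, mirroring what the Filter node UI configures without SQL.
    return [
        {h: row[h] for h in keep_headers if h in row}
        for row in rows
        if predicate(row)
    ]
```

The UI exposes `keep_headers` as checkboxes and the predicate as dropdown conditions, so a non-technical user composes the same logic visually.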

This allows the user to set up a data connection to a repository of raw data for use in the node chains in Flow view.
This is an example of a feature that used to live in another application but has now been brought into a single application as part of the self-serve initiative.

These provide the processing power for executing node chains.
This is another example of a feature that used to live in another application but has now been brought into a single application as part of the self-serve initiative.

Each pipeline contains multiple node chains, and depending on the client there could be over 400 pipelines.
To manage such a dense listing, we implemented a system of tags that can be applied to pipelines during creation.
The user can then use the content finder to filter pipelines by various categories, which helps when browsing large amounts of dense data.
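The content finder's behavior can be sketched as a tag-intersection filter: a pipeline appears only when it carries every selected tag. The pipeline and tag names below are hypothetical.

```python
def find_pipelines(pipelines, required_tags):
    # Return names of pipelines carrying every requested tag (AND filter),
    # the way the content finder narrows a long pipeline listing.
    wanted = set(required_tags)
    return [p["name"] for p in pipelines if wanted <= p["tags"]]

# Hypothetical catalog of tagged pipelines
catalog = [
    {"name": "claims_monthly", "tags": {"claims", "monthly"}},
    {"name": "rx_weekly", "tags": {"rx", "weekly"}},
    {"name": "claims_weekly", "tags": {"claims", "weekly"}},
]
```

Adding a second tag narrows the result set further, which is what makes tagging scale to listings of 400+ pipelines.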
