The Mechanics of Expansion
Expanding your integration portfolio does not require rebuilding from the ground up. By leveraging the platform's modular architecture, you can execute a clone-swap-remap strategy that retains your proven business logic while adapting connectivity for new endpoints.

Step 1: Clone the Solution to a New Environment

The first step is to establish a clean workspace for the new integration variant. Rather than cluttering your existing development environment with similarly named workflows (e.g., Candidate Export Greenhouse, Candidate Export UKG), best practice is to create a dedicated development environment for the new target system (e.g., UKG Dev). Once the new environment is established, duplicate your original, production-hardened workflows into this new space by importing the previous solution's bundle from the warehouse. This approach lets you maintain identical, succinct workflow naming conventions across different platform environments, ensuring consistency for your implementation and support teams without namespace conflicts.

Step 2: Swap the Connectors

With your logic cloned, the next step is to replace the endpoint connectivity. Because Tavio separates connectivity from orchestration, you can simply remove the Smart Connector for the original system (e.g., Greenhouse) and drag in the Smart Connector for the new system (e.g., UKG Ready). This simple change is powerful because Smart Connectors manage the specific complexities of their respective APIs, handling authentication, session management, pagination, and relationship traversal automatically. The surrounding workflow logic, such as loops, error-handling branches, and notification triggers, can often remain untouched because it relies on the platform's standardized behavior, not the idiosyncrasies of any specific API.

Step 3: Remap with CloudData

The final step is to bridge the gap between the new connector and your preserved logic using the CloudData transformation layer. Your core workflow
logic operates on a normalized data structure; when you switch systems, the incoming data structure changes (e.g., candidate_id becomes id, or email becomes contact_info.email). You do not need to rewrite the workflow to accommodate these field changes. Instead, you simply update the data map configuration within the CloudData node. By updating the transformation script to map the new system's schema to your established canonical format, in many cases the rest of your workflow continues to function as if nothing had changed, allowing you to deploy the new solution rapidly.
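To make the remapping step concrete, here is a minimal sketch of the kind of field mapping a CloudData transformation performs. This is not Tavio's actual transformation-script syntax; the function name, the canonical field names, and the exact shape of the UKG Ready record are assumptions for illustration, based only on the field renames mentioned above (candidate_id becoming id, email becoming contact_info.email).

```python
# Hypothetical sketch of a CloudData-style data map (not Tavio's real API).
# It converts the new source system's record shape into the canonical
# structure that the downstream workflow logic already expects, so the
# workflow itself does not change when the connector is swapped.

def to_canonical(record: dict) -> dict:
    """Remap an assumed UKG Ready-style candidate record to the canonical schema."""
    return {
        # The old system exposed "candidate_id"; the new one uses "id".
        "candidate_id": record["id"],
        # The old system exposed a flat "email"; the new one nests it.
        "email": record["contact_info"]["email"],
        # Fields whose names are stable pass through unchanged.
        "name": record.get("name", ""),
    }

# Example record shaped like the new system's output (illustrative only).
ukg_record = {
    "id": "u-1042",
    "contact_info": {"email": "avery@example.com"},
    "name": "Avery Quinn",
}
print(to_canonical(ukg_record))
```

Only this mapping changes when the source system changes; everything downstream keeps consuming the canonical record, which is what lets the rest of the workflow run untouched.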