
No-Code Data Pipelines: New Connectors, Greater Flexibility, and More

The latest nuvo No-Code Data Pipelines release is out!


We are excited to share the expanded functionality of nuvo’s No-Code Data Pipelines. With this latest release, we have increased the flexibility and efficiency of designing and executing pipelines. We have also focused on enabling you to test our data transformation capabilities quickly and effortlessly.

Your feedback has been invaluable in guiding our development efforts. We would love to hear your thoughts on this latest release! So let’s dive into it.

Smart S3 and (S)FTP Input Connectors

We have substantially expanded the capabilities of our existing S3 and (S)FTP input connectors to enable state-of-the-art data importing from these sources.

Previously, users were only able to specify the exact file name for importing data. We have now introduced the ability to select a folder from which the input data can be fetched for schedule-based pipelines. Moreover, users have the option to define specific inclusion and exclusion tags that determine whether a file in the folder should be processed or not.

These new features provide users with improved efficiency and flexibility when running pipelines. They can now choose to process all files within a designated folder, selectively process specific ones, or process only the newest file(s) in a particular folder.
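To make the inclusion/exclusion idea concrete, here is a minimal sketch of how such filtering could work conceptually. It assumes glob-style patterns; the tag syntax nuvo actually uses may differ, and `select_files` is a hypothetical helper, not part of the product.

```python
from fnmatch import fnmatch

def select_files(file_names, include_tags, exclude_tags):
    """Illustrative filter: keep files that match at least one inclusion
    tag (or all files, if no inclusion tags are set) and match no
    exclusion tag. Glob-style patterns are an assumption here."""
    selected = []
    for name in file_names:
        included = (not include_tags
                    or any(fnmatch(name, tag) for tag in include_tags))
        excluded = any(fnmatch(name, tag) for tag in exclude_tags)
        if included and not excluded:
            selected.append(name)
    return selected

# Example: pick up order exports, but skip backup copies.
print(select_files(
    ["orders_2024.csv", "orders_backup.csv", "readme.txt"],
    include_tags=["orders*"],
    exclude_tags=["*backup*"],
))
```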

Additionally, we have introduced new options for handling the input files. Users can now decide whether the input file should remain unchanged in the folder, be removed from the folder after execution, or be renamed. The renaming process is dynamic and allows users to combine static and execution-dependent elements. For example, they can include variables such as the timestamp of the pipeline execution ({timestamp}) or the original file name ({fileName}) as part of the renamed file.
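The dynamic renaming amounts to substituting placeholders into a template. The following sketch shows the idea with the two placeholders mentioned above; the exact placeholder syntax and timestamp format used by nuvo are assumptions, and `render_file_name` is a hypothetical helper.

```python
from datetime import datetime, timezone

def render_file_name(template, original_name, now=None):
    """Illustrative renaming: substitute {timestamp} with the execution
    time and {fileName} with the original file name. Timestamp format
    is an assumption made for this example."""
    now = now or datetime.now(timezone.utc)
    timestamp = now.strftime("%Y%m%dT%H%M%SZ")
    return (template
            .replace("{timestamp}", timestamp)
            .replace("{fileName}", original_name))

# Example: "processed_orders_20240131T120000Z.csv" for an execution
# on 2024-01-31 12:00:00 UTC.
print(render_file_name("processed_{fileName}_{timestamp}.csv", "orders"))
```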

With these advancements, our users can enjoy a seamless and efficient data processing experience, with the ability to manage input sources more effectively and customize file handling to suit their unique workflows.

Email Input and Output Connectors

We have added a completely new connector type to our pipelines: the email connector.

Email input connector: Many of our customers receive data from their customers and partners via email. To streamline the data import process, an email with input data attached can now directly trigger an event-based pipeline execution. We currently support the following input data formats: CSV, XLS(X), XML, and JSON.

Email output connector: The user can define a designated email address to which the cleaned output data is sent as an attachment after the pipeline execution. We currently support the following structured output data formats: CSV, XLS(X), and XML.
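Conceptually, the input connector only needs to decide whether an incoming attachment is in a supported format before triggering a run. A minimal sketch, assuming the listed formats map to their usual file extensions (`triggers_pipeline` is a hypothetical helper, not nuvo's actual API):

```python
from pathlib import Path

# Supported formats as listed in the post, mapped to file extensions
# (an assumption for this example).
INPUT_FORMATS = {".csv", ".xls", ".xlsx", ".xml", ".json"}

def triggers_pipeline(attachment_name):
    """Illustrative check: would this attachment start an event-based
    pipeline run? The real matching logic is internal to nuvo."""
    return Path(attachment_name).suffix.lower() in INPUT_FORMATS

print(triggers_pipeline("orders.CSV"))   # a CSV attachment qualifies
print(triggers_pipeline("photo.png"))    # an image does not
```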

One of the major benefits of email connectors is their straightforward setup process. Unlike HTTP(S) and (S)FTP connectors, which often require technical configurations and credentials, email connectors offer a quick and easy way to test new pipelines and data transformations.

Example Output Connectors

We have introduced three new sample output connectors to correspond to the existing sample input connectors and target data models. With just a few clicks and no need for credentials, you can easily set up a demo pipeline. This allows you to swiftly test nuvo's data transformation and column mapping capabilities and witness how they can assist you in solving your specific use case.

De-nesting of Nested JSON Input Data

We are pleased to announce that the familiar "allowNestedData" feature from our Data Importer SDK is now available for our Data Pipelines.

This feature simplifies the process of transforming nested data into a two-dimensional structure suitable for import. When de-nesting JSON files based on our pre-defined rules, array indices are joined with underscores ("_") and object keys with periods (".") so that the data can be displayed in a 2D table.
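To illustrate the de-nesting rules, here is a rough sketch of how such a flattening could work. The exact rule set nuvo applies is internal to the product; this only approximates the underscore/period convention described above.

```python
def denest(data, prefix=""):
    """Illustrative flattening of nested JSON-like data into one level:
    object keys are joined with "." and array indices with "_",
    approximating the rules described in the post."""
    flat = {}
    if isinstance(data, dict):
        for key, value in data.items():
            path = f"{prefix}.{key}" if prefix else key
            flat.update(denest(value, path))
    elif isinstance(data, list):
        for index, value in enumerate(data):
            path = f"{prefix}_{index}" if prefix else str(index)
            flat.update(denest(value, path))
    else:
        flat[prefix] = data
    return flat

# A nested record becomes flat column names suitable for a 2D table.
print(denest({"customer": {"name": "Ada"}, "items": [{"sku": "A1"}]}))
```

Each flattened key then maps directly onto a column in the two-dimensional target structure.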

You no longer need to invest engineering time and resources in reformatting nested data; you can start the import to the target system right away.

And there you have it! For more details, have a look at our release notes. We believe that these data pipeline advancements will greatly assist you in automating the ingestion of external data and seamlessly applying the required data transformations.

Should you have any feedback or require assistance in setting up your pipeline, please don't hesitate to schedule a meeting with our team or reach out to us directly. We are here to support you every step of the way.
