
AACP Product Ideas

Share your AACP product ideas, including Designer Cloud, Intelligence Suite and more - we're listening!

Currently, when pivoting a specific field into multiple columns, all other fields you want present in the resulting table must be individually added to "row labels".

First, it is very time-consuming when you have a lot of columns to add.

Second, when new columns are added to the source data, these new fields are not automatically included. When this happens, we need to:

  1. re-sample the data,
  2. make sure the new column is present,
  3. manually add it to the list in the row labels.


When using an automation tool such as Trifacta, I would expect my flow to handle new columns being added without my having to fix the flow every time. Adding an "All other fields" option, or the ability to select which fields to exclude, would make this process much smoother and would ensure that our flows are future-proof.
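To illustrate the behaviour I have in mind, here is a minimal sketch in Python/pandas (not Trifacta's own transform language; the column names and the helper are invented): the row labels are derived from whatever columns exist in the data, minus the pivoted field and an optional exclusion list, so columns added to the source later are picked up automatically.

    # Sketch of the requested "all other fields" behaviour, using pandas.
    # The column names and the helper itself are illustrative only.
    import pandas as pd

    def pivot_keep_other_fields(df, key_col, value_col, exclude=()):
        # Row labels = every column that is not being pivoted and not excluded,
        # so columns added to the source later are included without editing the flow.
        row_labels = [c for c in df.columns
                      if c not in (key_col, value_col) and c not in exclude]
        return (df.pivot_table(index=row_labels, columns=key_col,
                               values=value_col, aggfunc="first")
                  .reset_index())

    df = pd.DataFrame({
        "order_id": [1, 1, 2],
        "customer": ["A", "A", "B"],
        "metric":   ["qty", "price", "qty"],
        "value":    [3, 9.5, 7],
    })
    # If the source later gains, say, a "region" column, it becomes an extra
    # row label automatically.
    print(pivot_keep_other_fields(df, key_col="metric", value_col="value"))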

Currently, scheduling is allowed only at the instance level.

The request is to be able to allow scheduling for a particular user, instead of for all instance users.

In order to monitor the status of a plan that runs several different flows inside it (in my case around 300), I send an HTTP request to Datadog to display the success and failure results on a dashboard. The problem is that Datadog understands only epoch timestamps, not datetime values, and right now we cannot convert the timestamp into an epoch. I was thinking of approaching this problem in the following ways:

1) Having a pre-request script (a rough sketch of this is below)

2) Creating dynamic parameters in Dataprep, instead of using a fixed value, that can then be used in the HTTP request body

3) As a workaround only - creating a table that stores the flow name and timestamp, and using this table in a plan every time we run a flow. This is not the right way: it would work, but it wastes time because we would end up creating a separate table like this for each flow.
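The conversion itself is simple once the value is available to a script. Below is a minimal sketch of what such a pre-request step could do, assuming the flow finish time arrives as an ISO-8601 string; the payload fields, the timestamp format and the DD_API_KEY environment variable are assumptions for illustration, not existing Dataprep parameters.

    # Convert a datetime value to an epoch timestamp and post it to Datadog.
    # Timestamp format, payload shape and credential handling are assumptions.
    import os
    from datetime import datetime, timezone
    import requests

    def to_epoch(ts: str) -> int:
        # e.g. "2023-05-04T01:23:45Z" -> 1683163425
        return int(datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
                   .replace(tzinfo=timezone.utc)
                   .timestamp())

    def post_flow_result(flow_name: str, status: str, finished_at: str) -> None:
        payload = {
            "title": f"Dataprep flow {flow_name}: {status}",
            "text": f"Flow {flow_name} finished with status {status}.",
            "date_happened": to_epoch(finished_at),  # Datadog wants epoch seconds
            "tags": [f"flow:{flow_name}", f"status:{status}"],
        }
        resp = requests.post("https://api.datadoghq.com/api/v1/events",
                             headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
                             json=payload, timeout=10)
        resp.raise_for_status()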

As far as I know, the current error logs for a failed Trifacta job do not tell the user which recipe, which recipe step, or what data the error was thrown on.

This lack of basic information at the Trifacta level makes it hard for a normal user to debug Trifacta jobs. Typically, I have to work backwards through the flow, attaching and running an output for each recipe until I find the culprit recipe causing the issue. Then I have to disable steps one by one until I find the step that causes the recipe to fail. This is time- and resource-consuming.

As for the offending data triggering the problem, I still don't know how to get that, and that's actually crucially important for an ongoing issue we're having with Spark execution.

Therefore, I suggest that improved and simplified error logging would be very helpful in fixing problems in the future. Thank you for your consideration.

We need the full steps, in GCP Dataprep and GCP, to allow us to run scheduled jobs as a true service account (not a user account) without requiring authentication by the owning user account (which is timing out overnight due to the 16-hour policy we have for users).

So when we schedule a job, we should be able to choose a true technical account to "run the job as".

We have an issue because our AD users are synchronised from on-premise and a 16-hour timeout policy is applied to each user, so any job scheduled with a user will fail after 16 hours and the job will be disabled. There is no way for us to sync AD users to GCP IAM without this on-premise policy, so we need to be able to run with a service account.
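For context, this is roughly what "running as" a service account means on the GCP side: short-lived credentials are minted for the service account itself, so no user session (and no 16-hour AD-driven expiry) is involved. The sketch below uses the google-auth impersonation API purely to illustrate the idea; the service account name is a placeholder, and Dataprep does not currently offer a way to attach such an identity to a schedule, which is exactly the ask.

    # Illustration only: minting short-lived credentials for a dedicated
    # service account, independent of any user login session. The principal
    # name and scope are placeholders.
    import google.auth
    from google.auth import impersonated_credentials

    source_creds, _ = google.auth.default()

    job_runner_creds = impersonated_credentials.Credentials(
        source_credentials=source_creds,
        target_principal="dataprep-scheduler@my-project.iam.gserviceaccount.com",
        target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
        lifetime=3600,  # seconds, refreshed per run rather than tied to a user
    )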

Hi team,

We would need a page where a user can manage all the email notifications they receive from all flows (success and failure).

Thank you

We can migrate flows from one environment to another using the Trifacta APIs (a rough API sketch follows the steps below):

  1. Export the flow from the source environment and import it into the target environment.
  2. Rename the flow.
  3. Share the flow with the appropriate users according to the environment.
  4. Change the inputs and outputs of the flow.
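
Here is that sketch. The endpoint paths, authentication and response shapes are assumptions based on the v4 API naming and may differ between releases, so treat it as an outline of the sequence rather than a tested script; the sharing and input/output changes are noted only as comments.

    # Outline of a flow migration between environments via the REST API.
    # URLs, endpoints, auth and response fields below are assumptions.
    import requests

    SRC = "https://trifacta-dev.example.com"    # source environment (placeholder)
    DST = "https://trifacta-prod.example.com"   # target environment (placeholder)
    HEADERS = {"Authorization": "Bearer <token>"}

    def migrate_flow(flow_id: int, new_name: str) -> int:
        # 1. Export the flow package from the source environment.
        pkg = requests.get(f"{SRC}/v4/flows/{flow_id}/package", headers=HEADERS)
        pkg.raise_for_status()

        # 2. Import the package into the target environment.
        resp = requests.post(f"{DST}/v4/flows/package", headers=HEADERS,
                             files={"data": ("flow.zip", pkg.content)})
        resp.raise_for_status()
        new_id = resp.json()["id"]  # assumption: import response exposes the new flow id

        # 3. Rename the imported flow.
        requests.patch(f"{DST}/v4/flows/{new_id}", headers=HEADERS,
                       json={"name": new_name}).raise_for_status()

        # 4. Sharing the flow and repointing inputs/outputs would follow the
        #    same pattern with the relevant endpoints (omitted here).
        return new_id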