
Saturday, April 27, 2019

Einstein Analytics: Handling null values

This blog is not about null handling in Measures. As per this article, null values in Dimensions are not completely supported in Einstein Analytics; however, we often have to deal with null values in many scenarios.

To prevent data quality issues, Einstein Analytics will disregard any fields in Salesforce (or columns in external data) that are entirely null.

Grouping with Null
A Date field is null by default; when you use it in a chart for grouping, the null value will not be shown. To overcome this, if you pull data from Salesforce, use defaultValue (e.g. 1900-01-01) in sfdcDigest to override it.



A Dimension field is null by default; when you use it in a chart for grouping, the null value will not be shown. To overcome this, if you pull data from Salesforce, use defaultValue (e.g. NA; quotation marks are not needed) in sfdcDigest to override it, as sketched below.
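For example, a minimal sfdcDigest sketch in the dataflow JSON combining both overrides; the object and field names (Opportunity, Contract_Signed_Date__c, Region__c) are hypothetical, so replace them with your own:

"Extract_Opportunity": {
  "action": "sfdcDigest",
  "parameters": {
    "object": "Opportunity",
    "fields": [
      { "name": "Id" },
      { "name": "Contract_Signed_Date__c", "defaultValue": "1900-01-01" },
      { "name": "Region__c", "defaultValue": "NA" }
    ]
  }
}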

A Measure field defaults to 0 when the value is null. We cannot use a Measure field for grouping, but we can use it for filtering.


Filter null records in SAQL
q = load "DTC_Opportunity_SAMPLE";
q = foreach q generate 'Account_Owner' as 'Account_Owner', 'Product_Name' as 'Product_Name', (case when 'Product_Name' is null then "" else 'Product_Name' end) as 'PM';
q = filter q by 'PM' == "";
q = order q by 'Account_Owner';
q = limit q 10;

Count Not null records in SAQL
q = load "Lead";
q = foreach q generate (case when 'SFDC_Lead_ID__c' is null then "" else 'SFDC_Lead_ID__c' end) as 'SFDC_Lead_ID__c';
q = filter q by 'SFDC_Lead_ID__c' != "";
q = group q by all;
q = foreach q generate count() as 'count';

To group null as NA
q = load "DTC_Opportunity_SAMPLE";
q = foreach q generate coalesce('Product_Name', "NA") as 'Product_Name';
q = group q by 'Product_Name';
q = foreach q generate 'Product_Name' as 'Product_Name', count() as 'count';
q = order q by 'count' desc;


Augment transformation: when it cannot find the parent, the result is null


SAQL expression in the computeExpression above: case when 'Acq.Industry__c' is null then "Parent not available" else 'Acq.Industry__c' end
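As a sketch, the computeExpression node could look like this in the dataflow JSON; the node name computeParentIndustry, the source name augmentParent, and the output field name Parent_Industry are hypothetical:

"computeParentIndustry": {
  "action": "computeExpression",
  "parameters": {
    "source": "augmentParent",
    "mergeWithSource": true,
    "computedFields": [
      {
        "name": "Parent_Industry",
        "label": "Parent Industry",
        "type": "Text",
        "saqlExpression": "case when 'Acq.Industry__c' is null then \"Parent not available\" else 'Acq.Industry__c' end"
      }
    ]
  }
}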

Data in Salesforce

Result in Einstein Analytics

Notes from above screenshot:
  • 1st row, Parent Industry = NA, because we set the default value to NA in sfdcDigest.
  • 1st and 2nd rows, Account Source = null, because we set the default value to null in sfdcDigest.
  • 1st row, Employees and 2nd row, Annual Revenue = 0; Einstein Analytics automatically sets 0 for a Measure field with a null value.
  • 3rd and 4th rows, Parent Industry = Parent not available; this is because there is no parent lookup value, so we use computeExpression to set the value. This is different from the 1st row, where the parent lookup value is available but the Industry of that parent record is null.




Friday, April 26, 2019

Salesforce: Account Hierarchy columns & Recently Viewed columns

Can we customize the Account Hierarchy columns?

Classic - NO
This is the article and this is the idea.

Lightning - YES
Here is the article and here are the steps:
  • From Setup, at the top of the page, select Object Manager.
  • In Account, click Hierarchy Columns, then click the New button if none has been created, or the Edit link to edit the columns.
  • You can include up to 15 columns.


When you create Hierarchy Columns, the system automatically creates a new list view called "Org_Account_Hierarchy" and adds it to the Accounts list view menu. You can rename it, but not change its sharing settings; deleting this item resets the columns to the defaults.

By default (no hierarchy columns set up), account hierarchies display the same columns as the Recently Viewed Accounts standard list view. However, the list view columns don't change when you customize the hierarchy columns.


Recently Viewed List
In Classic, when we click a tab, such as the Accounts tab, by default it shows "Recent Accounts" with columns defined in Search Layouts - Tab.



When switching to Lightning, clicking the Accounts tab opens the "Recently Viewed" list view (if the pinned list view has not been changed); the list view columns in "Recently Viewed" are defined in Search Layouts - Search Results. This view cannot be deleted, have its visibility changed, or be renamed.

However, you will find another list view with a similar name that includes the object name in Lightning, e.g. Recently Viewed Accounts. We cannot configure the columns for this view (until the Summer '19 release), and we are unable to delete it, change its visibility, or rename it. So the easiest option is just to ignore it.






Wednesday, April 24, 2019

Einstein Analytics: SAQL in computeExpression with samples

computeExpression is one of the most powerful features in Dataflow. With computeExpression, you can "add" fields without having to change the source data.



1. Get field value - TEXT
'CreatedBy.Role.Name'
 as the field name contains a dot, enclose the field name with ' before and after

2. Set a text value - TEXT
"RoleName"
 always use " before and after a text value

3. Get current date - DATE
now()

4. Get the first 18 characters - TEXT
substr('RECORD_ID', 1, 18)
 using the substr() function; the field name must be enclosed with '

5. Get the last 18 characters with len() - TEXT
substr(UltimateParentPath, len(UltimateParentPath)-17, 18)
using the len() function

6. Combine text - TEXT
'CreatedDate_Year' + "-" + 'CreatedDate_Month' + "-" + 'CreatedDate_Day'

7. Combine text in case - TEXT
case when isDuplicate is null then 'Name' else 'Name' + " (" + 'Username' + ")" end
 using a case expression

8. Using multiple when in Case and compare Text - TEXT
case when 'Opportunity.Sales_Type__c' == "A" then "Type A" 
     when 'Opportunity.Sales_Type__c' == "B" then "Type B"  
     else "Type C" 
end

9. Check is Null - TEXT
case when 'Opportunity.Name' is null then "Yes" else "No" end
using the is null keyword

10. Check is Not Null - TEXT
case when 'OptySplit.SplitOwnerId' is not null then 'OptySplit.SplitOwnerId' else 'OwnerId' end
using the is not null keyword

11. Use && and ! as alternative - TEXT
case when 'Owner.Name' is null && !('Queue.Name' is null) then "Queue" 
     when !('Owner.Name' is null) then "User" 
     else "N/A" 
end

12. Use && and ! as alternative to get field value - TEXT
case when 'Owner.Name' is null && !('Queue.Name' is null) then 'Queue.Name' 
     when !('Owner.Name' is null) then 'Owner.Name' 
     else "N/A" 
end

13. Simple bucketing - TEXT
case 
  when Value == 0 then "[1] 0"
  when Value <= 1000000 then "[2] 0-1M"
  when Value <= 25000000 then "[3] 1M-25M"
  when Value <= 100000000 then "[4] 25M-100M"
  else "[5] 100+M"
end

14. Get numeric value from field - NUMERIC
case 
  when Type_Data is not null and Type__c == "Type A" then Annual_Data
  when Type_Value is not null and Type__c == "Type B" then Annual_Value
end
the else keyword is not mandatory in case; ' is not required if the field name does not contain a dot

15. Check Neglected Case - TEXT
case when DaysSinceLastActivity >= 60 then "true" else "false" end

16. Check Is Lost - TEXT
case when 'IsClosed' == "true" && 'IsWon' == "false" then "Yes" else "No" end
there is NO BOOLEAN type in Einstein Analytics, so boolean values are text and must always be enclosed with "

17. Using IN - TEXT
case when 'Opportunity.StageName' in ["Stage 1", "Stage 2", "Stage 3", "Stage 4"] then "true" else "false" end
 using the in [..] operator

18. Check is Overdue - TEXT
case when ('IsClosed' == "false") && (daysBetween(toDate(substr('ActivityDate', 1, 10), "yyyy-MM-dd"), now()) > 0) then "true" else "false" end
using daysBetween() function

19. Get Days Overdue - NUMERIC
case when 'IsOverdue' == "true" then daysBetween(toDate(substr('ActivityDate', 1, 10), "yyyy-MM-dd"), now()) else 0 end

20. Check is between 2-30 days - TEXT
(case when date('TIMESTAMP_DERIVED_Year', 'TIMESTAMP_DERIVED_Month', 'TIMESTAMP_DERIVED_Day') in ["30 days ago".."2 days ago"] then "yes" else "no" end)
using the date() function

21. Check is Yesterday - TEXT
(case when date('TIMESTAMP_DERIVED_Year', 'TIMESTAMP_DERIVED_Month', 'TIMESTAMP_DERIVED_Day') in ["1 day ago".."current day"] then "yes" else "no" end)

22. Check is Past Due - TEXT
case when IsClosed == "false" && (toDate(CloseDate_sec_epoch) < now()) then "true" else "false" end
using toDate() and _sec_epoch field

23. Duration in Second - NUMERIC
date_diff("second", toDate(ValidFromDate_sec_epoch), now())
using date_diff() function

24. Check Is Closed - TEXT
case when daysBetween(toDate(ActivityDate_sec_epoch), now()) >= 0 then "true" else "false" end

25. Get days since last activity - NUMERIC
case    
   when LastActivityDate is null then daysBetween(toDate(LastModifiedDate_sec_epoch), now())   
   when LastModifiedDate > LastActivityDate then daysBetween(toDate(LastModifiedDate_sec_epoch), now())   
   else daysBetween(toDate(LastActivityDate_sec_epoch), now()) 
end

26. Get Past Due Date - NUMERIC
case when IsClosed == "false" && (toDate(CloseDate_sec_epoch) < now()) then daysBetween(toDate(CloseDate_sec_epoch), now()) else 0 end

27. Get Opportunity Age - NUMERIC
case when IsClosed == "false" then daysBetween(toDate(CreatedDate_sec_epoch), now()) else daysBetween(toDate(CreatedDate_sec_epoch),toDate(CloseDate_sec_epoch)) end

28. Get Lead Age - NUMERIC
case when ('IsConverted' == "false") then daysBetween(toDate(CreatedDate_sec_epoch), now()) else daysBetween(toDate(ConvertedDate_day_epoch), toDate(CreatedDate_day_epoch)) end

29. Get Case Duration - NUMERIC
case when ('IsClosed' == "true") then ('ClosedDate_sec_epoch' - 'CreatedDate_sec_epoch')/86400 else ('CurrentDate_sec_epoch' - 'CreatedDate_sec_epoch')/86400 end

30. Get Opportunity Age - NUMERIC
case
   when ('ConvertedOpportunity.Name' is null) then 0 
   when ('ConvertedOpportunity.IsClosed' == "false") then ('CurrentDate_sec_epoch' - 'CreatedDate_sec_epoch')/86400  
   else ('ConvertedOpportunity.CloseDate_sec_epoch' - 'CreatedDate_sec_epoch')/86400 
end

31. Converting Created Date to PST - DATE
toDate('CreatedDate_sec_epoch'-3600*8)

32. Using starts_with(), ends_with, and lower() to compare string - TEXT
case
  when starts_with(lower(Subject),"call") then "Call"
  when ends_with(lower(Subject),"call") then "Call"
  else "Others"
end
 the operand containing the full string must be on the left; the comparison is case-sensitive, so use lower() to help

33. Use matches() for contain - TEXT
case when "abcd" matches "abc" then "found" else "not found" end
 the full string must be on the left; this operator is not case-sensitive and requires at least two characters

case when 'Product_Name' matches "cable" then "found" else "not found" end 
 this will work

case when "cable" matches 'Product_Name' then "found" else "not found" end 
 this is not allowed and fails with the error Invalid function argument: 'Product_Name'; the second operand must be text

case when !('Product_Name' matches "cable") then "a" else "b" end 
 use ! as not

  


Einstein Analytics: using Allow disjoint schema to transform dataset

Here is the use case: we have multiple columns, one for each type, to store values, which means we cannot easily build a chart when the values are spread across many columns.

Solution: transform the data source by splitting it into many rows and using one column.



Dataflow



Inside the computeExpression computeTYPE1 node:

Inside the Type_TYPE1 computed field:
this is text, which is the field name

Inside the Value_TYPE1 computed field:
this is numeric, which is the field value
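A minimal sketch of the computeTYPE1 node in the dataflow JSON, assuming the source node is named getData and the original column is Type_1__c (both hypothetical):

"computeTYPE1": {
  "action": "computeExpression",
  "parameters": {
    "source": "getData",
    "mergeWithSource": true,
    "computedFields": [
      { "name": "Type", "type": "Text", "saqlExpression": "\"Type 1\"" },
      { "name": "Value", "type": "Numeric", "precision": 18, "scale": 2, "saqlExpression": "'Type_1__c'" }
    ]
  }
}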


Do the same for computeExpression Type 2 and Type 3. Then, combine all the data using an append node, as sketched below.
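A sketch of the append node; the "Allow disjoint schema" option corresponds to the enableDisjointedSchemaMerge parameter in the dataflow JSON (node names are hypothetical):

"appendAll": {
  "action": "append",
  "parameters": {
    "enableDisjointedSchemaMerge": true,
    "sources": [ "computeTYPE1", "computeTYPE2", "computeTYPE3" ]
  }
}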


Once we have all the values spread across rows, use the sliceDataset transformation to drop the original Type 1, Type 2, and Type 3 fields, for example:
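A sketch of the sliceDataset node in drop mode, assuming the original columns are named Type_1__c, Type_2__c, and Type_3__c (hypothetical):

"sliceTypes": {
  "action": "sliceDataset",
  "parameters": {
    "mode": "drop",
    "source": "appendAll",
    "fields": [
      { "name": "Type_1__c" },
      { "name": "Type_2__c" },
      { "name": "Type_3__c" }
    ]
  }
}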







Monday, April 8, 2019

Salesforce: User current app

Question: is there a way to check what is the user current app?

Answer: yes, but only for Lightning.


UserAppInfo
Since API version 38.0, Salesforce has an object called UserAppInfo; this object stores the last Lightning app each user logged in to.

Sample query: SELECT Id, UserId, AppDefinitionId, FormFactor, CreatedById, CreatedDate, LastModifiedById, LastModifiedDate FROM UserAppInfo WHERE UserId = '00580000004JEfS'


Notes:
- AppDefinitionId: the ID of the last Lightning app that the user logged in to.
- FormFactor: The relative size of the app as displayed, values are:
     Small—suitable for a small device like a mobile phone
     Medium—suitable for a tablet
     Large—suitable for a large display device, like a monitor

Since AppDefinitionId is updateable, this means we can mass update users' current app, for example:
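A minimal anonymous Apex sketch, assuming (as noted above) that AppDefinitionId is writable; the user Id set and targetAppDurableId are hypothetical placeholders:

// Switch a group of users to the Lightning app identified by
// the AppDefinition.DurableId value in targetAppDurableId.
Set<Id> userIds = new Set<Id>{ '00580000004JEfS' };
String targetAppDurableId = 'MyAppDurableId';
List<UserAppInfo> infos = [
    SELECT Id, AppDefinitionId
    FROM UserAppInfo
    WHERE UserId IN :userIds
];
for (UserAppInfo info : infos) {
    info.AppDefinitionId = targetAppDurableId;
}
update infos;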


AppDefinition
This object represents the metadata of an app and its navigation items. This object is available in API version 43.0 and later.

Sample query: SELECT DurableId, Label, UiType, Description, DeveloperName, MasterLabel, NavType, UtilityBar FROM AppDefinition ORDER BY Label



Notes:
- DurableId: instead of Id, DurableId represents the App Id as used by UserAppInfo.AppDefinitionId
- UiType: options are Aloha (for Classic) and Lightning
- NavType: options are Standard and Console
- UtilityBar: only available for Lightning




Friday, March 29, 2019

Salesforce: Finding Reports and Dashboards from Private folder

Use case: unable to delete report because it used in dashboards.

When you try to delete the report, Salesforce returns the following error:

Report cannot be deleted
One or more dashboards depend on this report. Please delete the dashboard components referring to this report and try again. 

The issue is, the error does not tell us which dashboard contains the report we want to delete.

So, let us find the related dashboards.

1. Create Report Type 
Reports (A) with at least one related record from Dashboard Components (B)
You can add Dashboard information to this report, such as:
- Dashboard ID
- Dashboard Running User (run as specified user, or let authorized users change running user)
- Folder
- Running User (this is the viewing user's name)
- Running User Active
- Title

You may find that some of the reports have no Dashboard info, even though the report type is "Reports with at least one related record from Dashboard Components", which is pretty confusing, right?

Possibility (1)
The dashboard has been deleted. You are right; however, once the dashboard is deleted (it sits in the recycle bin), the system allows you to delete the report.


As you can see in the above screenshot, the first line does not have dashboard info because the dashboard has been deleted, and the system allows me to delete the report, so this does not fit our use case.

Possibility (2)
The dashboard is stored in someone's private folder.


The difference here is that we can see the dashboard Title but no other info. In this case, we cannot delete the report.




2. Query from Private folders
For reports used as the source of dashboards stored in someone's private folder, you need to query the Private folders. You need the "Manage All Private Reports and Dashboards" permission; then you can query dashboards and reports in Private folders. You also need to add the allPrivate query scope to find Reports and Dashboards in private folders.

To return reports in private folders that haven't been run for more than one year:
SELECT Id, OwnerId FROM Report USING SCOPE allPrivate WHERE LastRunDate < LAST_N_DAYS:365

To query reports inside a specific User's private folder:
SELECT Id FROM Report USING SCOPE allPrivate WHERE OwnerId = '005A0000000Bc2deFG'

To query all dashboards stored in User's private folder:
SELECT Id, Title, FolderName, FolderId, CreatedById, LastModifiedById FROM Dashboard USING SCOPE allPrivate ORDER BY Title 
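As a sketch, you can also approach this from the component side via the DashboardComponent object, which has a CustomReportId field; the report Id below is a hypothetical placeholder, and components of dashboards in private folders may still require the permission above:

SELECT Id, Name, DashboardId, Dashboard.Title FROM DashboardComponent WHERE CustomReportId = '00OA0000000M1b2'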



Note:
For Dashboard:
- You should look at FolderId: this is where the dashboard or report is stored.
- The dashboard or report can be created by someone else, so don't rely on CreatedById.

For Report:
- Look at OwnerId; this tells you who owns the report stored in the private folder.



Reference: Delete Reports and Dashboards from personal or private folders



Tuesday, March 5, 2019

Einstein Analytics: deployment with Change Set

As Einstein Analytics is deeply integrated with the Salesforce platform, we can deploy Einstein Analytics assets in a Change Set from the Salesforce platform.



Here are a few findings related to Einstein Analytics asset deployment with Change Sets:

1. A Change Set is able to deploy a Dataflow to the target org, even if the target org is not enabled for sync. You still need to enable sync for the ability to create dataflows manually in Data Manager.

2. For dashboard and lens deployment, if the app does not yet exist in the target org, you need to deploy the app as a component within the same Change Set; otherwise, the deployment will fail.

3. A Change Set will deploy a Dataset, but it will not move the data; you need to re-run the dataflow or re-export the data, otherwise the Dataset will not be visible in Analytics Studio. The dashboard and lens will be visible in Analytics Studio, but you can't open them until the dataset is visible.

4. A Change Set is able to deploy a Lens and Dashboard without the Dataset.




Reference: Migrate Analytics Assets with Change Sets



Sunday, March 3, 2019

Salesforce: Query Fields Permission

In the previous blog, Using Permission Set to Query User Permission, we discussed querying PermissionSet and PermissionSetAssignment for permissions related to the user; at the end of that blog we also introduced a query on the ObjectPermissions object to get permissions related to an Object.

In this blog, we are going to introduce another object called FieldPermissions. As you know, a user's basic field accessibility is determined by the user's Profile; extra permission can then be given to the user through Permission Sets. So, a query on FieldPermissions will give you an idea of why/how a user is able to access a specific field, and what the permission on that field is (Read or Edit).

SELECT Id, ParentId, Parent.Name, SobjectType, Field, PermissionsEdit, PermissionsRead FROM FieldPermissions WHERE SobjectType = 'Account' AND Field = 'Account.Active__c' ORDER BY Parent.Name

The sample result from the above query:


The main field in the above query is ParentId; this field refers to the PermissionSet object, so the Parent.Name result is PermissionSet.Name, and the values include both Profiles and Permission Sets.

If a PermissionSet.Name value starts with X00e, it is a Profile (including Standard and Custom profiles), while one that does not start with X00e is a Permission Set.
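Instead of relying on the X00e prefix, here is a sketch that distinguishes them explicitly using the IsOwnedByProfile field on PermissionSet; Parent.Profile.Name is populated only for profile-owned rows:

SELECT Id, Parent.Name, Parent.IsOwnedByProfile, Parent.Profile.Name, Field, PermissionsRead, PermissionsEdit FROM FieldPermissions WHERE SobjectType = 'Account' AND Field = 'Account.Active__c' ORDER BY Parent.Name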

From the above screenshot, let us check whether the Activate_Contract_2 permission set gives extra permission to the Active__c field on the Account object:



Reference: FieldPermissions



Sunday, February 24, 2019

Salesforce: Using Custom Field for Forecasts

This blog is only applicable to Collaborative Forecasts; at this moment, Customizable Forecasting is scheduled for retirement as of Summer '20.

By default, Salesforce forecasts Revenue using the Amount field from Opportunity. However, to fit your business needs, you can add an additional forecast type based on a custom field in Opportunity; it must be a currency field.

1. Opportunity Split
In this blog, I will add a custom currency field from Opportunity called Extra Income. I'll select the field 'Extra Income' and type in 'Extra' as Split Label.


Tip: in the Split Type, "Totals 100%" must be ticked; otherwise, forecasting with the custom field will not work.


2. Forecast Setting
Now, I need to configure the forecast settings: click the link "+ Add another forecast type", then select Extra.


Then, select a Forecast Measurement (Revenue or Quantity) and the fields to show in the Opportunity List.
You need to click the Save button to save; otherwise nothing is saved, even if you have clicked the OK button on several screens.


Forecasts tab
Now, let us see if this will work.

This is the default forecast based on Amount; it is called Opportunity Revenue.



Now, let us flip to the new Extra forecast we just created: click the gear icon at the top right.



Now, let us forecast with our new forecast type "Extra"




Reference: Enable Custom Field Forecasts in Collaborative Forecasts



Salesforce: Sort Report

Creating and sorting a report is simple, but sometimes you will wonder why the report is not produced the way it should be, for example the report below:



We sort the report by Brand, which is a picklist field, but the order does not work properly. I expected it to sort alphabetically, the same order we see in Classic.

Try to convert the report to a summary or matrix report:

summary report


matrix report


What is the cause?
The order of groupings containing Picklist field values is based on how the values are arranged in the picklist field itself, not the arrangement selected in the Sort Order.

The easy fix is to change the picklist value order for that field; see the article Sort Picklists if you need guidance on how to change picklist values.

However, if you don't have admin access, or for some reason you are not allowed to change the order, you can add a bucket field in the report. Make sure to create the buckets in the order that you want the picklist values to be displayed on the report.



So, instead of using the original field, use this bucket field in the report.




Here is an idea to vote for to sort picklist values alphabetically; it has gathered only 470 points after 9 years.





Monday, February 18, 2019

Using Emoji in Salesforce

Nowadays, emoji are widely used everywhere, from mobile messaging and email to more serious business applications such as Salesforce. If you don't have an emoji keyboard, you can copy a variety of emoji from the internet; one of the most famous sources is emojipedia.org.



As per this article 5 Ways You Can Use Emoji, you can use emoji in Salesforce, in:
- Chatter Post
- Validation Rules
- Help text
- Formula field
- Picklist values
- Field value
- List view

Emoji in Salesforce works in both Lightning and Chatter.
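For example, a minimal formula field sketch that shows an emoji per Opportunity stage; the labels and stage values are hypothetical:

CASE( StageName,
  "Closed Won", "✅ Won",
  "Closed Lost", "❌ Lost",
  "🔄 In Progress"
)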






Tuesday, February 5, 2019

Einstein Analytics: Security Predicate for CSV file

Einstein Analytics supports security predicates, a robust row-level security feature that enables you to model many different types of access controls on datasets. Einstein Analytics also supports sharing inheritance, to synchronize with sharing that’s configured in Salesforce, subject to certain limitations. If you use sharing inheritance, you must also set a security predicate to take over in situations when sharing settings can’t be honored.

This blog will discuss setting up a security predicate for a dataset created from a CSV file. By default, when you load a CSV file to create a new dataset, the security predicate will be empty, which means everyone with access to the dataset can see all rows.

We can build a security predicate even if the CSV file does not originally come from Salesforce, as long as there is an identifier that links the CSV file with Salesforce data. We can build the security predicate after the dataset is created in Einstein Analytics.

Syntax
<dataset column> <operator> <value>

Examples
'UserId' == "$User.Id"
  • UserId is the API name of the dataset column
  • == is the operator
  • $User.Id is the Id of the current Salesforce user when opening the dashboard or lens

If you look at the basic syntax above and change it to "$User.Id" == 'UserId', the syntax becomes invalid and will be rejected by the system. Even though the values are the same, a security predicate must always start with the dataset column, not the other way round.

You can use and and or logic in the security predicate:
('Expected_Revenue' > 4000 || 'Stage Name' == "Closed Won") && 'isDeleted' != "False"

Consider the following requirements for the predicate expression:
  • The expression is case-sensitive.
  • The expression cannot exceed 1,000 characters.
  • There must be at least one space between the dataset column and the operator, between the operator and the value, and before and after logical operators. This expression is not valid: 'Revenue'>100. It must have spaces like this: 'Revenue' > 100.

How to create exceptions?
This means a group of Salesforce users should not be restricted by the security predicate. One simple idea is to add unique values, such as a User Role Id, User Profile Id, or a custom field from the User object, to the dataset security predicate and to the data itself.

Scenario: all users with Profile = Executive are allowed to see all data; otherwise, users only see data matching their Territory. In this scenario, Territory is a custom field on the User object and is also available in the Dataset.
1. Get the Profile Id of Executive Profile
2. Add Profile Id from (1) as a column to all rows in CSV file before loading to Einstein Analytics
3. Load the CSV file to Einstein Analytics
4. Edit the dataset created and create security predicate as follow

'Territory' == "$User.Territory__c" || 'Executive_ProfileId' == "$User.ProfileId"

The first part allows users to see only rows where the Territory in the dataset matches the Territory defined on the user record.
The second part allows all users with the Executive Profile to see all data; that's why we use or logic (||).


Using the same method, you can add a Role as an exception too: just add another column and fill the Role Id into all rows. However, if you need to define more than one profile or role, you need to keep duplicating the columns in the CSV file and use || for each exception, e.g. 'Territory' == "$User.Territory__c" || 'Executive_ProfileId' == "$User.ProfileId" || 'Strategy_ProfileId' == "$User.ProfileId". I know this is not a pretty solution, but it works.


You can define the dataset security predicate by editing the dataset and entering a valid security predicate.

The system will check and reject the security predicate if the syntax is invalid, such as "$User.Id" == 'UserId' (wrong order), or if the value does not exist, such as 'UserId' == "$User.Field__c" (Field__c does not exist on the User object). However, the system will not validate and will not reject a column name that does not exist, such as 'UserField' == "$User.Id" (UserField does not exist as a dataset column).


If you replace the data of an existing dataset, the security predicate defined will stay, including when you restore a previous version of the dataset.




Friday, January 18, 2019

Einstein Analytics: Understanding Nodes in Monitor

1. CSV File Load
When we create a new dataset using CSV file, here are the items in the Monitor:


Let's understand each item and the node type. The data flow title is sample_data_4 Upload flow - Overwrite:
- sample_data_4 is the dataset name, not the CSV file name;
- the Upload flow - Overwrite suffix is always the same for all CSV loads.

Nodes involved for CSV data load:
  • sfdcFetch
  • csvDigest
  • optimizer
  • sfdcRegister

When we replace the dataset with a new CSV file, the title and nodes in the Monitor stay the same.


2A. Simple data fetch from Salesforce
Here we have a simple dataflow with 2 nodes: sfdcDigest and sfdcRegister.
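A minimal sketch of such a dataflow definition (node names and fields are hypothetical):

{
  "Extract_Account": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "Account",
      "fields": [ { "name": "Id" }, { "name": "Name" } ]
    }
  },
  "Register_Account": {
    "action": "sfdcRegister",
    "parameters": {
      "source": "Extract_Account",
      "alias": "Account_Dataset",
      "name": "Account Dataset"
    }
  }
}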

Items in the Monitor:



The title is the Dataflow name, and the nodes involved in the Monitor for this dataflow are:
  • sfdcDigest
  • optimizer
  • sfdcRegister

What happens if we add filter conditions to the sfdcDigest node? Will it change the nodes in the monitor? The answer is no, because the filter happens inside the sfdcDigest node only.

optimizer always runs before sfdcRegister, for each sfdcRegister node.


2B. Adding augment nodes to Dataflow




Here are nodes in the Monitor:


From the above screenshot, we have another sfdcDigest node for getUser, and an augment node.


2C. Adding sliceDataset and filter nodes to Dataflow



Here are nodes in the Monitor:


Now we have additional nodes: sliceDataset and filter, in the same order as in the dataflow.


2D. Add edgemart and computeExpression nodes to Dataflow


Here are nodes in the Monitor:


The edgemart node starts first, and the computeExpression node runs after augmentAccount_User, so the order follows the dataflow. From the screenshot, edgemart and computeExpression also run a sliceDataset node named DropSharingRulesFrom-; on further checking, this DropSharingRulesFrom- appears randomly, and it can appear for sfdcDigest or augment nodes too. I am still checking what causes it.


3. Trend Salesforce Report
Next, let us see how Trend works from a Salesforce report to Einstein Analytics. When you set up Trend for the first time from a Salesforce report, it runs once to create the dataset and dashboard; this activity happens before the scheduled date/time.

Sample from Monitor:


There are only 3 nodes here:
  • sfdcFetchReport
  • optimizer
  • sfdcRegister

But when the scheduler runs, these are the nodes:



Let us look at each node:
  • edgemart - reads the existing dataset
  • sfdcFetchReport - gets new data from the Salesforce report
  • let us ignore DropSharingRulesFrom
  • append - appends the new data from sfdcFetchReport to the existing dataset data read by edgemart
  • optimizer and sfdcRegister - overwrite the dataset


4. Recipe with Append

This is a simple recipe that appends one dataset to another and produces a new dataset.



When we run the recipe, here are nodes in the Monitor:




Let us look at each node:
  • edgemart from the append (new) table and edgemart from the root (base) table
  • let us ignore DropSharingRulesFrom
  • two computeExpression nodes
  • an append transformation node
  • a sliceDataset transformation node
  • optimizer and sfdcRegister nodes


