
Wednesday, June 13, 2018

Einstein Analytics: Multi-Select Picklist to Text

As of the Summer '18 release, Einstein Analytics is not really friendly with Multi-Select Picklist fields. One of the issues: when a record holds multiple values, only the 1st value is shown when you display the data as a Values Table in a Lens or a Table widget in a Dashboard.

Service Type contains multi-values in Salesforce


Service Type only shows the 1st value in EA


As per this document, we can customize the dataflow JSON to treat multi-select values as text. Once you have the dataflow built:

  • Download the JSON file
  • Edit the file and add ,"isMultiValue": false after the field name, then save it (back up the original JSON file first); see the snippet below this list
  • Upload back the edited JSON file to Dataflow
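
For illustration, a field entry in the sfdcDigest node of the dataflow JSON could look like this after the edit (the node name Extract_Case, the Case object, and the Service_Type__c API name are assumptions for this sketch, not taken from the original dataflow):

"Extract_Case": {
    "action": "sfdcDigest",
    "parameters": {
        "object": "Case",
        "fields": [
            { "name": "Id" },
            { "name": "Subject" },
            { "name": "Service_Type__c", "isMultiValue": false }
        ]
    }
}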



Here is the result: the multi-select values show as text in Einstein Analytics.



Note: if you do not update the dataflow JSON as above, the data will flow into EA as multi-value. It will still work as a list filter, with each value acting as an independent value, but record selection will behave as if the record holds all of its multi-values.


Reference: Limitations of Multi-Value Fields in Einstein Analytics



Sunday, June 10, 2018

Einstein Analytics: Getting started with cogroup

You can combine data from two or more data streams into a single data stream using cogroup. The data streams must have at least one common field. Only data that exists in both groups appears in the results.

Example:
qs = cogroup qsd by 'State', qsp by 'State';
In this sample, data stream qsd contains the field State and data stream qsp also contains the field State, so we can use it for grouping.

q = cogroup ops by 'Account', meetings by 'Company';
Account in the ops data stream will have the same value as Company in the meetings data stream.
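
As a hypothetical continuation (the Amount field on ops is only assumed for illustration), the projection after this cogroup can reference fields from both streams:
q = cogroup ops by 'Account', meetings by 'Company';
q = foreach q generate ops.'Account' as 'Account', sum(ops.'Amount') as 'Sum_Amount', count(meetings) as 'Total_Meetings';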


Use case: show the death percentage per state from 2 datasets.



Let's use cogroup to combine the dataset:
dsd = load "StateDeath2";
dsp = load "StatePopulation2";
ds = cogroup dsd by 'State', dsp by 'State';
ds = foreach ds generate dsp.'State' as 'State', sum(dsp.'Count') as 'Population', sum(dsd.'Count') as 'Death', (sum(dsd.'Count')/sum(dsp.'Count')*100) as 'Death (%)';

The Result


Let's try to use Opportunity and User datasets from our previous blog.
dsu = load "user1";
dso = load "opportunity";
ds = cogroup dsu by 'Id', dso by 'OwnerId';
ds = foreach ds generate first(dsu.'Name') as 'Name', sum(dso.'Amount') as 'Sum_Amount';
ds = order ds by 'Name';


If you notice, Angela is not shown on that chart because she does not have any Opportunity records. Remember that only data that exists in both groups appears in the results.



Saturday, June 9, 2018

Einstein Analytics: Opportunity Dashboard with SAQL Union to show all User

Let's go straight to the business requirement: "show all sales reps with their total sales amount; if a sales rep does not own any opportunity, the sales rep's name must still be shown with $0."

High-level solution:
1. Load User master data
2. Load Opportunity data
3. Use UNION to combine the dataset

User data


Opportunity data


Build a User master Lens grouped by Id and Name
q = load "user1";
q = group q by ('Id', 'Name');
q = foreach q generate 'Id' as 'Id', 'Name' as 'Name', count() as 'count';
q = order q by ('Id' asc, 'Name' asc);
q = limit q 2000;

Let's modify the necessary SAQL:
1. Rename the q data stream to dsu -- for an easier and unique identifier
2. Rename the projected 'Id' to 'User_Id', and 'Name' to 'User_Name' -- I'll tell you the reason later
3. Remove 'count' as we do not need it -- User_Id is unique
4. Add 'sum_Amount' with 0 in the foreach -- I'll tell you the reason later
5. Remove the limit

Here is the result
dsu = load "user1";
dsu = group dsu by ('Id', 'Name');
dsu = foreach dsu generate 'Id' as 'User_Id', 'Name' as 'User_Name', 0 as 'sum_Amount';
dsu = order dsu by 'User_Id';


Build an Opportunity Lens grouped by OwnerId
q = load "opportunity";
q = group q by 'OwnerId';
q = foreach q generate 'OwnerId' as 'OwnerId', sum('Amount') as 'sum_Amount';
q = order q by 'OwnerId' asc;
q = limit q 2000;

Let's modify the necessary SAQL:
6. Rename the q data stream to dso -- for an easier and unique identifier
7. Rename the projected 'OwnerId' to 'User_Id' -- I'll tell you the reason later
8. Add 'User_Name' with "-" in the foreach -- I'll tell you the reason later
9. Remove the limit

Here is the result
dso = load "opportunity";
dso = group dso by 'OwnerId';
dso = foreach dso generate 'OwnerId' as 'User_Id', "-" as 'User_Name', sum('Amount') as 'sum_Amount';
dso = order dso by 'User_Id';


Combine the data streams with UNION
final = union dsu,dso;
final = group final by ('User_Id');
final = foreach final generate first('User_Name') as 'User_Name', sum('sum_Amount') as 'sum_Amount';


The Complete SAQL
dsu = load "user1";
dsu = group dsu by ('Id', 'Name');
dsu = foreach dsu generate 'Id' as 'User_Id', 'Name' as 'User_Name', 0 as 'sum_Amount';
dsu = order dsu by 'User_Id';

dso = load "opportunity";
dso = group dso by 'OwnerId';
dso = foreach dso generate 'OwnerId' as 'User_Id', "-" as 'User_Name', sum('Amount') as 'sum_Amount';
dso = order dso by 'User_Id';

final = union dsu,dso;
final = group final by ('User_Id');
final = foreach final generate first('User_Name') as 'User_Name', sum('sum_Amount') as 'sum_Amount';


The Moment of Truth



Explanation
  • We rename Id and Name in step (2) to have the same column names as steps (7) and (8)
  • We add 'sum_Amount' in step (4) to have the same column name as the dso data stream
  • For our use case, we align both data streams to the same columns by adding dummy columns before applying UNION
  • In the last row, we use the aggregate function first() to return the first user name, as our union starts with dsu, which contains the user name, while dso in the second/last position always contains "-" for the user name, see step (8)
  • In the last row, we also sum the 'sum_Amount' again; practically this sums the 'sum_Amount' with 0, remember we added 0 as a dummy value in step (4)

Make it simple
Since sum_Amount is always 0 in dsu, and User_Name is always "-" in dso, we simply do not need to add them to the data streams, and we will still get the same result. Let's remove the unnecessary statements.

dsu = load "user1";
dsu = group dsu by ('Id', 'Name');
dsu = foreach dsu generate 'Id' as 'User_Id', 'Name' as 'Name';

dso = load "opportunity";
dso = group dso by 'OwnerId';
dso = foreach dso generate 'OwnerId' as 'User_Id', sum('Amount') as 'sum_Amount';

final = union dsu,dso;
final = group final by ('User_Id');
final = foreach final generate first('Name') as 'Name', sum('sum_Amount') as 'sum_Amount';
final = order final by 'Name';





Monday, June 4, 2018

Einstein Analytics: Filter Conditions in sfdcDigest

When you do not need to bring all the data from a Salesforce object into Einstein Analytics (EA), you can filter the data when retrieving it in sfdcDigest.

By filtering out unnecessary data flowing to EA, dashboard builders do not need to keep filtering out unused data, such as inactive users in the User object. Filtering data also improves Dataflow performance and reduces the total number of records stored in EA.

In this sample, I want to bring in all active users and exclude Chatter users. So, there are 2 filters to add in sfdcDigest:
- isActive = true
- userType = "Standard"

If you come from a Salesforce background, you might simply enter isActive = true && userType = 'Standard', but this is different in EA. As of the Summer '18 release, you still need to manually enter these filter conditions in JSON format.

Add this filter in sfdcDigest under Filter Conditions:
[
  {"field":"isActive","isQuoted":true,"value":"true","operator":"="},
  {"field":"UserType","value":"Standard","operator":"="}
]

Looking at the first filter: isActive is a boolean field, so passing just true without "" will cause an error when the Dataflow runs, so we need to add "isQuoted": true; this is also applicable for numeric and date fields.

Let's see this in JSON Dataflow:
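
As a rough sketch (the node name Extract_User and the field list are assumptions, not taken from the original dataflow), the sfdcDigest node with these filter conditions could look like:

"Extract_User": {
    "action": "sfdcDigest",
    "parameters": {
        "object": "User",
        "fields": [
            { "name": "Id" },
            { "name": "Name" },
            { "name": "IsActive" },
            { "name": "UserType" }
        ],
        "filterConditions": [
            { "field": "isActive", "isQuoted": true, "value": "true", "operator": "=" },
            { "field": "UserType", "value": "Standard", "operator": "=" }
        ]
    }
}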



For the complete reference, check out this documentation: Structured Filter in sfdcDigest Transformation.




Thursday, May 31, 2018

Salesforce: Files sharing Query

More than 3 years ago, I wrote a blog, Content Architecture, sharing how to query content, and it is still valid now. This blog is the continuation: how to query who or what a File is shared with, and the level of visibility.

There is an object called ContentDocumentLink (prefix 06A), which is a child of ContentDocument (prefix 069). This object represents the link between a Salesforce CRM Content document or Salesforce file and where it is shared. A file can be shared with other users, groups, records, and Salesforce CRM Content libraries. This object is available in API versions 21.0 and later for Salesforce CRM Content documents and Salesforce Files.

To query this object, it must be filtered on Id, ContentDocumentId, or LinkedEntityId. Let's see some samples:

SELECT Id, ContentDocumentId, LinkedEntityId FROM ContentDocumentLink 
This query will throw an error because it does not filter on anything.
MALFORMED_QUERY: Implementation restriction: ContentDocumentLink requires a filter by a single Id on ContentDocumentId or LinkedEntityId using the equals operator or multiple Id's using the IN operator.


To query who or what a file is shared with
SELECT Id, ContentDocumentId, ContentDocument.Title, LinkedEntityId, LinkedEntity.Name, LinkedEntity.Type, ShareType, Visibility FROM ContentDocumentLink WHERE ContentDocumentId = '0690k000000TzTMAA0'



To query all files shared with a record
SELECT Id, ContentDocumentId, ContentDocument.Title, LinkedEntityId, LinkedEntity.Name, LinkedEntity.Type, ShareType, Visibility FROM ContentDocumentLink WHERE LinkedEntityId = '0010k00000IKS3YAAX'



To query all files shared with records of an object
SELECT Id, ContentDocumentId, LinkedEntityId FROM ContentDocumentLink WHERE LinkedEntityId in (SELECT Id FROM Account)




Reference: ContentDocumentLink



Tuesday, May 15, 2018

Einstein Analytics: Setting Initial Selection

Adding a List Selector or Date Selector in Einstein Analytics is pretty simple: just a few clicks, and you will get the result. However, if you notice, at the bottom of Step Properties there is a textbox for Initial Selection; this sets the initial value for the list when running the dashboard, but we cannot set it from this properties panel in the UI.

When you add a widget, by default, it will be "No Selections".



To set a value for Initial Selections, edit the dashboard JSON and look for the steps section, then add "start": with the value. Make sure the value is available in the list.
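
As a purely illustrative sketch (the step name, grouping, and selected value are made up here, and the exact shape of the start property can differ by step type and release), the edited step could look something like:

"Type_List_1": {
    "type": "aggregateflex",
    "selectMode": "single",
    "query": {
        "groups": ["Type"],
        "measures": [["count", "*"]]
    },
    "start": {
        "display": ["Installation"]
    }
}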


Now, back to the UI, and check the Initial Selections



Preview the dashboard and check that the initial value is the same as the value we put in the JSON.



We can do the same for the Date selector. Edit the JSON and look for the related step. The sample below sets the date range to the current quarter.



This is the result when you run the dashboard: as we set in the JSON, the initial selection will be from the current quarter to the current quarter. You can change it to year, month, week, or day, or also use an absolute date.



However, if you hate editing JSON code for this, you can set the initial value from the UI too: click the ... button next to the save icon and select Pick Initial Selections, then select the date selector, range selector, or list selector to set the initial value. This activity edits the same JSON at the back end.