
Monday, February 18, 2019

Using Emoji in Salesforce

Nowadays, emoji are widely used everywhere from mobile messaging and email to more serious business applications such as Salesforce. If you don't have an emoji keyboard, you can copy a variety of emoji from the internet; one of the most popular sources is emojipedia.org.



As per this article, 5 Ways You Can Use Emoji, you can use emoji in Salesforce in:
- Chatter Post
- Validation Rules
- Help text
- Formula field
- Picklist values
- Field value
- List view

Emoji in Salesforce work in both Lightning and Chatter.
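
For example, here is a minimal sketch of emoji in a formula field: a text formula on Opportunity that flags deals by size (the object, field, threshold, and emoji are just for illustration, not from the article):

 IF( Amount >= 1000000, "🔥 Hot Deal", "❄️ Keep Warm" )

The same copy-paste approach works for validation rule error messages, help text, and picklist values.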






Tuesday, February 5, 2019

Einstein Analytics: Security Predicate for CSV file

Einstein Analytics supports security predicates, a robust row-level security feature that enables you to model many different types of access controls on datasets. Einstein Analytics also supports sharing inheritance, to synchronize with sharing that’s configured in Salesforce, subject to certain limitations. If you use sharing inheritance, you must also set a security predicate to take over in situations when sharing settings can’t be honored.

This blog will discuss setting up a security predicate for a dataset created from a CSV file. By default, when you load a CSV file to create a new dataset, the security predicate is empty, which means everyone with access to the dataset can see all rows.

We can build a security predicate even when the CSV file does not originally come from Salesforce, as long as there is an identifier that links the CSV file to Salesforce data. We can build the security predicate after the dataset is created in Einstein Analytics.

Syntax
<dataset column> <operator> <value>

Examples
'UserId' == "$User.Id"
  • UserId is the API name of the dataset column
  • == is the operator
  • $User.Id is the Id of the Salesforce user who opens the dashboard or lens

If you take the basic syntax above and change it to "$User.Id" == 'UserId', the syntax becomes invalid and will be rejected by the system. Even though the values are the same, a security predicate must always start with the dataset column, never the other way round.

You can use AND (&&) and OR (||) logic in the security predicate:
('Expected_Revenue' > 4000 || 'Stage Name' == "Closed Won") && 'isDeleted' != "False"

Consider the following requirements for the predicate expression:
  • The expression is case-sensitive.
  • The expression cannot exceed 1,000 characters.
  • There must be at least one space between the dataset column and the operator, between the operator and the value, and before and after logical operators. This expression is not valid: 'Revenue'>100. It must have spaces like this: 'Revenue' > 100.

How to create exceptions?
An exception means a group of Salesforce users should not be restricted by the security predicate. A simple approach is to add a unique value, such as a User Role Id, a Profile Id, or a custom field from the User object, to both the security predicate and the data itself.

Scenario: all users with Profile = Executive are allowed to see all data; everyone else sees only data matching their Territory. In this scenario, Territory is a custom field on the User object and is also available in the dataset.
1. Get the Profile Id of the Executive profile
2. Add the Profile Id from (1) as a column to all rows in the CSV file before loading it to Einstein Analytics
3. Load the CSV file to Einstein Analytics
4. Edit the dataset created and define the security predicate as follows

'Territory' == "$User.Territory__c" || 'Executive_ProfileId' == "$User.ProfileId"

The first part allows users to see only rows where Territory in the dataset matches the Territory defined on their user record.
The second part allows all users with the Executive profile to see all data, which is why we use OR logic (||).


Using the same method, you can add a Role as an exception too; just add another column and fill in the Role Id for all rows. However, if you need more than one profile or role as an exception, you have to keep duplicating columns in the CSV file and chaining them with ||, e.g. 'Territory' == "$User.Territory__c" || 'Executive_ProfileId' == "$User.ProfileId" || 'Strategy_ProfileId' == "$User.ProfileId". I know this is not a pretty solution, but it works.


You can define the dataset security predicate by editing the dataset and entering a valid security predicate.

The system will check the security predicate and reject it if the syntax is invalid, such as "$User.Id" == 'UserId' (wrong order), or if the value does not exist, such as 'UserId' == "$User.Field__c" (where Field__c does not exist on the User object). However, the system will not validate or reject a column name that does not exist, such as 'UserField' == "$User.Id" (where UserField does not exist as a dataset column).


If you replace the data of an existing dataset, the security predicate you defined stays, including when you restore a previous version of the dataset.




Friday, January 18, 2019

Einstein Analytics: Understanding Nodes in Monitor

1. CSV File Load
When we create a new dataset using a CSV file, here are the items in the Monitor:


Let's understand each item and its Node Type, using the flow titled sample_data_4 Upload flow - Overwrite:
- sample_data_4 is the dataset name, not the CSV file name;
- the Upload flow - Overwrite suffix is always the same for all CSV loads.

Nodes involved for CSV data load:
  • sfdcFetch
  • csvDigest
  • optimizer
  • sfdcRegister

When we replace the dataset with a new CSV file, the title and nodes in the Monitor stay the same.


2A. Simple data fetch from Salesforce
Here we have a simple dataflow with 2 nodes: sfdcDigest and sfdcRegister.
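
For reference, a minimal sketch of this two-node dataflow in dataflow JSON might look like the following (the object, field, and node names are illustrative). Note that the optimizer node you see in the Monitor has no counterpart in the JSON; the system adds it automatically before sfdcRegister:

 {
   "extractOpportunity": {
     "action": "sfdcDigest",
     "parameters": {
       "object": "Opportunity",
       "fields": [
         { "name": "Id" },
         { "name": "Name" },
         { "name": "Amount" }
       ]
     }
   },
   "registerOpportunity": {
     "action": "sfdcRegister",
     "parameters": {
       "alias": "Opportunity_DS",
       "name": "Opportunity DS",
       "source": "extractOpportunity"
     }
   }
 }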

Items in the Monitor:



The title is the dataflow name, and these are the nodes shown in the Monitor for this dataflow:
  • sfdcDigest
  • optimizer
  • sfdcRegister

What happens if we add filter conditions to the sfdcDigest node? Will it change the nodes in the Monitor? The answer is no, because the filter happens inside the sfdcDigest node itself.

An optimizer node always runs before sfdcRegister, once for each sfdcRegister node.


2B. Adding augment nodes to Dataflow




Here are the nodes in the Monitor:


From the above screenshot, we have another sfdcDigest node for getUser, plus an augment node.
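
For illustration, an augment node that joins accounts to their owner users might be defined like this in the dataflow JSON (the node, key, and field names are assumptions, not taken from the screenshot):

 "augmentAccount_User": {
   "action": "augment",
   "parameters": {
     "left": "extractAccount",
     "left_key": [ "OwnerId" ],
     "relationship": "Owner",
     "right": "getUser",
     "right_key": [ "Id" ],
     "right_select": [ "Name" ]
   }
 }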


2C. Adding sliceDataset and filter nodes to Dataflow



Here are the nodes in the Monitor:


Now we have additional nodes: sliceDataset and filter, in the same order as in the dataflow.
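
As a rough sketch, the two transformations might be defined like this in dataflow JSON (the mode, fields, filter condition, and source names are illustrative): sliceDataset keeps or drops columns, while filter keeps only the rows that match a condition.

 "sliceFields": {
   "action": "sliceDataset",
   "parameters": {
     "mode": "drop",
     "source": "augmentAccount_User",
     "fields": [ { "name": "OwnerId" } ]
   }
 },
 "filterClosedWon": {
   "action": "filter",
   "parameters": {
     "source": "sliceFields",
     "filter": "StageName:EQ:Closed Won"
   }
 }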


2D. Add edgemart and computeExpression nodes to Dataflow


Here are the nodes in the Monitor:


The edgemart node starts first, and the computeExpression node runs after augmentAccount_User, so the order follows the dataflow. From the screenshot, edgemart and computeExpression also run a sliceDataset node named DropSharingRulesFrom-; on further checking, this DropSharingRulesFrom- appears randomly and can appear for sfdcDigest or augment nodes too. I am still checking what causes it.
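
For reference, a computeExpression node that adds a derived text field might look like this (the field name, type, and SAQL expression are illustrative):

 "computeDealSize": {
   "action": "computeExpression",
   "parameters": {
     "source": "augmentAccount_User",
     "mergeWithSource": true,
     "computedFields": [
       {
         "name": "DealSize",
         "type": "Text",
         "saqlExpression": "case when 'Amount' >= 1000000 then \"Large\" else \"Small\" end"
       }
     ]
   }
 }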


3. Trend Salesforce Report
Next, let us see how Trend brings a Salesforce report into Einstein Analytics. When you set up a Trend from a Salesforce report for the first time, it runs once to create the dataset and dashboard; this activity happens before the scheduled date/time.

Sample from Monitor:


There are only 3 nodes here:
  • sfdcFetchReport
  • optimizer
  • sfdcRegister

But when the scheduler runs, these are the nodes:



Let us see each node:
  • edgemart - reads the existing dataset
  • sfdcFetchReport - gets the new data from Salesforce
  • let us ignore DropSharingRulesFrom
  • append - combines the existing dataset data read by edgemart with the new data from sfdcFetchReport
  • optimizer and sfdcRegister - overwrite the dataset


4. Recipe with Append

This is a simple recipe to add a dataset to another dataset and produce a new dataset.



When we run the recipe, here are the nodes in the Monitor:




Let us see each node:
  • an edgemart node for the append (new) table and an edgemart node for the root (base) table
  • let us ignore DropSharingRulesFrom
  • two computeExpression nodes
  • an append transformation node
  • a sliceDataset transformation node
  • optimizer and sfdcRegister nodes



Sunday, January 13, 2019

Einstein Analytics: Using SOQL

So far, we all know that to build dashboards in Einstein Analytics, we need to bring the data into Einstein Analytics and store it as a dataset. In this blog, I will share how to get data directly from Salesforce using SOQL, which means we can create a chart widget or table in Einstein Analytics with live Salesforce data.

You need to know the basics of dashboard JSON in Einstein Analytics. After you create the dashboard, create a step with type = soql, e.g.
 "soql_step_name": {  
  "type": "soql",  
  "query": "SELECT Name from ACCOUNT",  
  "strings": ["Name"],  
  "numbers": [],  
  "groups": [],  
  "selectMode": "single"  
 }  

Once the step is added, you can use it in any widget. The isFacet and useGlobal properties don't apply to this step type. You can use a binding to filter other steps based on a selection in a soql step.
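
As a sketch of such a binding (untested; OppsDataset and AccountName are assumed names), a saql step can read the selected Name from the soql step above: cell() picks row 0, column Name from the selection, and asString() renders it as a quoted SAQL string:

 "opps_for_selected_account": {
   "type": "saql",
   "query": "q = load \"OppsDataset\"; q = filter q by 'AccountName' == {{ cell(soql_step_name.selection, 0, \"Name\").asString() }}; q = group q by all; q = foreach q generate count() as 'count';",
   "strings": [],
   "numbers": ["count"],
   "groups": [],
   "selectMode": "single"
 }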

Let's see more samples:
 "soql1": {  
         "type": "soql",  
         "query": "SELECT Id,Name,NumberOfEmployees,Type from ACCOUNT",  
         "strings": [  
           "Type",  
           "Id",  
           "Name"  
         ],  
         "numbers": [  
           "NumberOfEmployees"  
         ],  
         "groups": [],  
         "selectMode": "single"  
       }  
 "soql2": {  
         "groups": [],  
         "numbers": [  
           "foo"  
         ],  
         "query": "SELECT count(id) foo from ACCOUNT",  
         "selectMode": "single",  
         "strings": [],  
         "type": "soql"  
       }  
  "soql3": {  
         "type": "soql",  
         "query": "SELECT NumberOfEmployees,Name,Type from ACCOUNT",  
         "strings": [  
           "Type",  
           "Name"  
         ],  
         "numbers": [  
           "NumberOfEmployees"  
         ],  
         "groups": [  
           "Type"  
         ],  
         "selectMode": "single"  
       }  

Notes:
- as with a normal step in JSON, the order of the parameters does not matter
- the type parameter is "soql"
- the query parameter must be valid SOQL and contain all the fields needed
- fields from the query result should be put under the strings or numbers parameter
- the groups parameter is optional, but it is needed when you have grouping in the widget


Here is the widget result from each step above:

step soql1


step soql2


step soql3





Analytics on Einstein Activity Capture using Einstein Analytics

As per the article Guidelines for Capturing Email and Events with Einstein Activity Capture, activities captured by Einstein Activity Capture aren't stored in Salesforce, so they don't show up in standard or custom Salesforce reports. However, Einstein Activity Capture provides access to the Activities dashboard, which is built on Einstein Analytics. The Activities dashboard summarizes sales activities, including activities added with Einstein Activity Capture. This is only available in Enterprise, Performance, and Unlimited Editions.



After Einstein Activity Capture is enabled, the Activities dashboard is created. If you don't see the dashboard after 24 hours, go to the Einstein Activity Capture settings page, turn off Einstein Activity Capture, and then turn it on again. A dataflow named Activities will be created in Einstein Analytics together with the Activities dashboard; check it and make sure it is scheduled.



Once the dataflow runs, it will create a dataset called Activities.



The activity data that users see in the Activities dashboard depends on whether you use role hierarchy. If you use role hierarchy, users see data for only activities that they’re involved with and that users below them in the role hierarchy own. If you don’t use role hierarchy, users see data for all activities in the dataset.

The Activities dashboard in Einstein Analytics provides a summary of sales activities that were added to Salesforce manually and by Einstein Activity Capture. By looking at the Activities dashboard, we can't really differentiate or filter only the activities related to Einstein Activity Capture, so let us dig further.

Here are the steps to filter only Einstein Activity Capture data from the Activities dataset:
1. Open the Activities dataset; this will create a new lens
2. Change the lens mode from chart to Values Table mode
3. Filter by Source Id where it IS NOT equal to "core:events" and "core:tasks"
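
If you prefer SAQL, a roughly equivalent filter might look like this (assuming the dataset API name is Activities and the column API name is SourceId; verify the actual API names in your org):

 q = load "Activities";
 q = filter q by 'SourceId' != "core:events" && 'SourceId' != "core:tasks";
 q = foreach q generate 'ActivityType' as 'ActivityType', 'SourceId' as 'SourceId', 'Customer' as 'Customer';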



A few fields to take note of in this table:
- Activity Type: whether the activity is an Email or Event
- Account Id, Account Name: if the activity is tagged to an Account
- Related Id, Related Type, Related Record: if the activity is related to a specific object
- Person Id, Person Type, Customer: the person tagged to the activity
- User Id, User: the internal user who owns that activity



Wednesday, January 9, 2019

Einstein Analytics: Using Salesforce Report Trend with Dataflow

If you are new to Einstein Analytics, you will probably be shocked that not all objects can be retrieved from Salesforce into the Einstein Analytics platform, even objects that are supported by SOQL; check out this article for the objects that are not supported. On top of that, there are also known issues when retrieving supported Salesforce objects, such as here and here.

Furthermore, there are report-only fields in Salesforce; these fields are not available on the object itself, such as Last Stage Change Date, Stage Duration, and Is Split on the Opportunity report, or Last Activity and Unread by Owner on Lead, and many more.

How can we bring this information from Salesforce to Einstein Analytics? The answer is using Trend. But, as you know, Trend keeps adding data to the dataset, while the requirement here is to have exactly the same number of rows in Salesforce and Einstein Analytics.

Solution: let us build a dataflow to manipulate the dataset produced by the trend.

1. Trend Salesforce Report
There is nothing fancy here: just open the report and click the Trend button; you need to specify the Dataset name, Dashboard title, Schedule Frequency, Days, and Time. Once you click the Trend button, the system will run it within the next few minutes, so you can see the dataset and dashboard created in Einstein Analytics.

2. Open Dataset
Edit the dataset created by Trend, and take note of the dataset API name.



3. Create Dataflow
Here is the logic of dataflow:
a). Read the dataset created by Trend, e.g. Dataset-Source

b). Get other objects from Salesforce as necessary, and augment them with Dataset-Source; you can do all the data manipulation here.

c). Register the data as a new dataset, e.g. Dataset-Target

d). Clean Dataset-Source: use a filter transformation with a condition that matches no rows, so the result is empty (see the sketch after these steps).

e). Register the clean data with Alias = the dataset API name from step 2 above, and Name equal to the dataset name. This step with the alias and name is very important; it makes sure the Trend data is overwritten with empty data.


f). Schedule the dataflow and make sure it only runs after the Trend schedule; to be safe, you can leave a 1-hour gap between the two schedules.
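
Here is a rough sketch of steps (a), (d), and (e) in dataflow JSON, assuming the dataset API name from step 2 is Dataset_Source. The filter Id:EQ:NO_MATCH assumes the dataset has an Id column, and NO_MATCH is just an arbitrary value that never occurs, so the filter returns zero rows:

 {
   "readTrendDataset": {
     "action": "edgemart",
     "parameters": { "alias": "Dataset_Source" }
   },
   "emptyRows": {
     "action": "filter",
     "parameters": {
       "source": "readTrendDataset",
       "filter": "Id:EQ:NO_MATCH"
     }
   },
   "overwriteTrendDataset": {
     "action": "sfdcRegister",
     "parameters": {
       "alias": "Dataset_Source",
       "name": "Dataset-Source",
       "source": "emptyRows"
     }
   }
 }

Because the register node reuses the same alias and name, each dataflow run overwrites the Trend dataset with zero rows, ready for the next Trend append.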


A sample of a dataflow using this solution:






