
Sunday, March 29, 2020

Salesforce: EmailMessage object

When Enhanced Email is enabled, Salesforce stores emails in the EmailMessage object. Emails sent from Salesforce are saved as both EmailMessage records and Task records. The EmailMessage record is linked to its Task record through the ActivityId field.

If you use the Outlook panel (and do not enable Einstein Activity Capture), you can manually "Log Email" to Salesforce for emails received and sent. Both received and sent emails will be stored as an EmailMessage record and a Task record.

How do we differentiate an email sent from Salesforce from one manually logged from an email client?
You will not find any difference on the Task object; both Type and TaskSubType are populated with "Email". But there are some differences on the EmailMessage object. Check out this query:
SELECT Id, ActivityId, FromAddress, ToAddress, FromName, IsClientManaged, MessageIdentifier, Subject, TextBody FROM EmailMessage


Rows 1, 4, 5 - emails manually logged from Outlook
Rows 2, 3 - emails sent from Salesforce

As you can see, IsClientManaged and MessageIdentifier are different.
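
As a quick check, a query along these lines should return only the client-logged emails, assuming IsClientManaged is true for manually logged emails as the result above suggests:

SELECT Id, ActivityId, FromAddress, ToAddress, Subject, MessageIdentifier FROM EmailMessage WHERE IsClientManaged = true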


Note: using the My Email to Salesforce service (BCC) will not create an EmailMessage record, only a Task.





Monday, March 23, 2020

Einstein Analytics: using Flatten node to get Account Parent

Here is our scenario: we have a multi-level account hierarchy. These are the sample accounts for this blog:



Use case 1: display all accounts and their opportunities when the top parent is selected.

Dataflow:


We need to manually edit the Flatten node in JSON. By default, the Multi Field and Path Field fields are created as system fields, which aren't visible in the user interface. To make the fields appear in the user interface and in the dataflow, add a schema section to the flatten transformation and set the IsSystemField metadata attribute to false for each field in the transformation.

"Flatten_UltimateParent": {
    "schema": {
      "objects": [
        {
          "label": "Flatten_UltimateParent",
          "fields": [
            {
              "name": "UltimateParentPath",
              "label": "UltimateParentPath",
              "isSystemField": false
            },
            {
              "name": "AccountParentIds",
              "label": "AccountParentIds",
              "isSystemField": false
            }
          ]
        }
      ]
    },
    "action": "flatten",
    "parameters": {
      "include_self_id": true,
      "multi_field": "AccountParentIds",
      "path_field": "UltimateParentPath",
      "source": "getAccount",
      "self_field": "Id",
      "parent_field": "ParentId"
    }
  }


Unfortunately, we will not see the schema in the dataflow UI.

We also need to Connect Data Source between Account and Opportunity using the Ultimate Parent Name.



Here is the dashboard:



Note:

1) Multi_Field from the flatten node will contain the Account's own Id (we selected "Include Self ID" in the flatten node) and all of its parents' Ids.

Notice that the top parent Id 0018000001BNnOPAA1 is stored across the ENTIRE hierarchy, while the Account F Id is stored only on Account F itself and its child Account G. Don't be tricked when AccountParentIds shows only one value, because this is a multi-value field.

2) Path_Field will show the full hierarchy from the record's own Id, up through its parent Id and all the way to the top level.

* I use Image (under Show Data As) to show the full content of the UltimateParentPath column


Use case 2: display all accounts and their opportunities when an Account is selected; the account could be at the top, middle, or lower level of the account hierarchy.

Dataflow: let us modify the existing dataflow as below:



We just need to add an augment node to bring in the Account Name using the Multi_Field AccountParentIds.
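
For reference, here is a sketch of how this augment node could look in the dataflow JSON; the node name and relationship name are illustrative, while left, right, and the keys follow the nodes used above. Since AccountParentIds is a multi-value field, the lookup operation must be LookupMultiValue:

"Augment_AccountParent": {
    "action": "augment",
    "parameters": {
      "left": "Flatten_UltimateParent",
      "left_key": [ "AccountParentIds" ],
      "relationship": "AccountParent",
      "right": "getAccount",
      "right_key": [ "Id" ],
      "right_select": [ "Name" ],
      "operation": "LookupMultiValue"
    }
  }

The augmented field will come out as AccountParent.Name (relationship name plus field name), which is the Account Parent Name used to connect the data sources below.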



You also need to Connect Data Source between Account and Opportunity using the Account Parent Name from the augment node.



Here is the dashboard:



When Account F is selected, the dashboard will filter to Account F and its child accounts.


Reference: flatten Parameters



Saturday, March 14, 2020

Einstein Analytics: Dataflow Performance Best Practice

Performance is critical for Einstein Analytics dataflows; e.g. an optimized dataflow may take only 10 minutes to run, while the same dataflow with a poor design may take 1 hour (including sync setup). Without well-architected dataflows, it will be hard to maintain and sustain Einstein Analytics as a whole as the company evolves.

Here are a few items noted from my personal findings and experience; if you have additional input or a different perspective, feel free to reach out to me.


1. Combine all computeExpression nodes whenever possible

image-1



image-2

The calcURI node in image-1 contains one computed field returning Numeric, and the calcURI2 node likewise contains one other computed field returning Numeric; together the two nodes took a total of 3:41.

In image-2, we combined both computed fields into the calcURI node, and it took only 2:00.
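
As an illustration, the combined node could look roughly like this; the source, field names, precision/scale, and formulas below are placeholders, not the actual ones from the test:

"calcURI": {
    "action": "computeExpression",
    "parameters": {
      "source": "augment_Opportunity",
      "mergeWithSource": true,
      "computedFields": [
        {
          "name": "Field_A",
          "type": "Numeric",
          "precision": 18,
          "scale": 2,
          "saqlExpression": "Amount * 0.1"
        },
        {
          "name": "Field_B",
          "type": "Numeric",
          "precision": 18,
          "scale": 2,
          "saqlExpression": "Amount - Expected_Amount"
        }
      ]
    }
  }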


2. Do compute as early as possible, and augment as late as possible

The rationale behind this is that a compute node processes fewer fields when it runs before an augment (since an augment always adds fields to the stream), unless you need a field from the augment node for the computation.


3. Remove all unnecessary fields

In my experience, a dataflow usually serves a dashboard or a clone of a dashboard. The more fields each node handles, the more power and time it needs, so slice out unnecessary fields if they are not needed in the dashboard or lens.

image-3

Notice that calcURI3 in image-1 and image-2 took around 2:08. In image-3, we add a slice node before calcURI3 to remove unnecessary fields; this reduces the number of fields processed in calcURI3, so it took only 1:55.
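
Here is a sketch of such a slice node using the sliceDataset transformation; the node, source, and field names are placeholders. Mode is set to drop to remove the listed fields (use select instead to keep only the listed fields):

"slice_RemoveFields": {
    "action": "sliceDataset",
    "parameters": {
      "mode": "drop",
      "source": "calcURI2",
      "fields": [
        { "name": "Unused_Field_1" },
        { "name": "Unused_Field_2" }
      ]
    }
  }

calcURI3 would then point its source to this slice node.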


4. Combine all sfdcDigest nodes of the same object into one node, if sync is not enabled

For some reason, your org may not have sync enabled; this does not mean you must enable it straight away, and please DO NOT enable it without a complete analysis, as it may cause data filtering issues.

You should combine all sfdcDigest nodes of the same object into one node. Imagine you have 10 million Opportunity rows and every sfdcDigest node takes 10 minutes (as an example); if the dataflow designer adds 3 sfdcDigest nodes for Opportunity, the data retrieval alone will need 30 minutes.
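
In other words, keep a single digest per object with the union of all fields needed downstream. A minimal sketch, where the field list is just an example:

"getOpportunity": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "Opportunity",
      "fields": [
        { "name": "Id" },
        { "name": "Name" },
        { "name": "AccountId" },
        { "name": "Amount" },
        { "name": "StageName" },
        { "name": "CloseDate" }
      ]
    }
  }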





Thursday, March 12, 2020

Einstein Analytics: Precision and Scale

Precision and Scale are important and required for a computeExpression node that returns Numeric in a dataflow; otherwise, your dataflow run will fail.

For Numeric fields, as per the article External Data Metadata Format Reference:
  • precision: the maximum number of digits in a numeric value, includes all numbers to the left and to the right of the decimal point (but excludes the decimal point character). Value can be up to 18.
  • scale: the number of digits to the right of the decimal point in a numeric value, must be less than the precision value.

But in short:
  precision: must be 1 - 18
  scale: must be 0 - 17 and less than the precision value


Let us see how this works in practice. I'll do the same calculation in several computeExpression fields, but with different precision and scale; the formula is A/B for all calculations. Here is the result:



Calc_10_5 means precision = 10 and scale = 5, and so on. At a glance, you may think that the decimals do not exist; this is incorrect, as you need to "format numbers" on the widget or in the metadata.
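
In the dataflow JSON, precision and scale are defined on each computed field. Calc_10_5, for example, would look roughly like this (the node and source names are illustrative):

"calcPrecisionTest": {
    "action": "computeExpression",
    "parameters": {
      "source": "getData",
      "mergeWithSource": true,
      "computedFields": [
        {
          "name": "Calc_10_5",
          "type": "Numeric",
          "precision": 10,
          "scale": 5,
          "saqlExpression": "A / B"
        }
      ]
    }
  }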

For this blog's testing, I set 5 decimal places:



Here is the result after all fields are set to 5 decimal places:



From the above table, "scale" makes the difference in the calculation result: the result is rounded up or down based on the number of decimal places defined by the scale.

Notice that a decimal fraction below 0.5 is rounded down, while 0.5 and above is rounded up. But if scale = 0, all decimals are rounded down; see Calc_10_0.



Reference: External Data Metadata Format Reference




Tuesday, January 14, 2020

Einstein Analytics: Using EdgeMart object from Salesforce Direct

In the Winter '20 release, Einstein Analytics introduced Salesforce Direct; read the release notes for complete info on Salesforce Direct.

However, Salesforce Direct lets you get more than just Salesforce object data; it can also reach data in Einstein Analytics. One of those sources is the EdgeMart object, not to be confused with the edgemart node in a dataflow.

Let's get hands-on. Make sure you are in a production org; the EdgeMart object is not available in sandboxes at this moment.

1. Create a new Dashboard
2. Click the Create Query button (if you do not see the button, click the blank canvas)
3. Select Salesforce Direct as the data source
4. Type EdgeMart in search box
5. Select EdgeMart


6. You will now be presented with an Untitled Query showing a bar chart with a count of rows; this count represents the number of datasets you have.

7. You can switch to table mode as needed.


The table above shows where each dataset is located, who created it, who last modified it, the data refresh date, etc.


Reference: Show your data's refresh date with Salesforce Direct



Friday, January 10, 2020

Einstein Analytics: Grouping in Dataflow

Following the blogs on transposing data from columns to rows and from rows to columns, today I have another challenge: grouping data based on a date.

Here is the data


I know that a recipe offers this functionality to group data easily; however, I am reluctant to put a recipe in between two dataflows, as it will cause a maintenance nightmare in the future.



But can we do this in a dataflow? Dataflow does not offer data grouping by default, but we can still achieve it with some tricks. Here we go:


The key node here is cr1, which is a computeRelative node (a JSON sketch of this node follows after the steps below). I add 4 fields here:
- Sum_1
- Sum_2
- Sum_3
- IsLast

1. Partition the data by Date



2. For fields Sum_1 to Sum_3, choose SAQL (not Source Field); the Type should be Numeric, and remember to enter the Scale and Default Value.
Here is the SAQL expression:
case when previous(Sum_1) is null then current(Data_1) else current(Data_1) + previous(Sum_1) end


3. For IsLast, choose SAQL (not Source Field); the Type should be Text. Here is the SAQL expression:
case when next(Data_1) is null then "Yes" else "No" end

data after computeRelative, before cleanup


4. Delete unused rows with a filter node and unused columns with a slice node.
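
For reference, here is a sketch of how the cr1 node could look in the dataflow JSON, showing Sum_1 and IsLast (Sum_2 and Sum_3 follow the same pattern). The source name, the orderBy field, and the scale/default values are assumptions, so adjust them to your data:

"cr1": {
    "action": "computeRelative",
    "parameters": {
      "source": "getData",
      "partitionBy": [ "Date" ],
      "orderBy": [
        { "name": "Id", "direction": "asc" }
      ],
      "computedFields": [
        {
          "name": "Sum_1",
          "expression": {
            "saqlExpression": "case when previous(Sum_1) is null then current(Data_1) else current(Data_1) + previous(Sum_1) end",
            "type": "Numeric",
            "scale": 2,
            "default": "0"
          }
        },
        {
          "name": "IsLast",
          "expression": {
            "saqlExpression": "case when next(Data_1) is null then \"Yes\" else \"No\" end",
            "type": "Text"
          }
        }
      ]
    }
  }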



In another scenario, if you just need to count the items in a group, replace Data_1 with 1. For example, with Count_1 as the field name in the computeRelative node:
case when previous(Count_1) is null then 1 else 1 + previous(Count_1) end




Sunday, January 5, 2020

SimplySfdc in 2019


Happy New Year 2020! To follow the yearly tradition, I would like to share some statistics for SimplySfdc in 2019; here are the statistics for 2018.

In 2019, fewer blogs were written compared to 2018 (41% fewer), but total Pageviews and total Sessions increased.

Page 2019 2018 2017 change*
Total New Page 37 63 34 -41.27%
Total Pageviews 210,213 171,249 115,744 22.75%
Total Sessions 185,396 149,574 97,310 23.95%
Pages / Session 1.13 1.14 1.19 -0.88%
* compare 2019 to 2018


Similar to previous years, organic search contributes the largest portion of traffic in 2019, and this year it crossed the 70% mark. Direct, referral, and social traffic were weaker in 2019.

Channel Source 2019 2018 2017
1. Organic Search 74.06% 69.63% 67.66%
2. Direct 23.35% 27.15% 25.64%
3. Referral 1.72% 1.76% 4.09%
4. Social  0.82% 1.46% 2.60%


Google, as always, is the king of search engines globally, and the same applies to SimplySfdc in 2019: it contributes more than 96% of search traffic, which is more than 1% lower compared to 2018. Bing and Yahoo gained slightly more share, and other search engines show an increase in contribution too.

Top Search Engine 2019 2018 2017
1. Google 96.09% 97.48% 96.46%
2. Bing 2.61% 2.02% 2.66%
3. Yahoo 0.56% 0.47% 0.77%
4. Other 0.74% 0.03% 0.11%


Twitter, as the predominant social media for the Salesforce #Ohana, stays in the 1st spot, contributing more than 39% of social traffic for SimplySfdc. LinkedIn and Blogger each saw a huge increase of more than 16 percentage points compared to 2018.

Top Social Media Source 2019 2018 2017
1. Twitter (#1 in 2018) 39.05% 56.50% 26.42%
2. LinkedIn (#3 in 2017)  31.57% 15.65% 44.81% 
3. Blogger (#4 in 2017) 20.21% 4.17% 4.61%
4. Facebook (#2 in 2017) 5.57% 18.00% 7.15%
5. Others 3.60% 5.68% 17.01%
* in 2017 - StackExchange contribute 7.29% and Google+ contribute 7.11%


There is no change in the top 6 visitor countries; however, US visitors gained almost 2% of the total number of visitors, while all other countries contributed a smaller percentage. The top six countries represent almost 79% of the visitors.

Top Visitor Country 2019 2018 2017
1. United States 43.60% 41.84% 49.45%
2. India 22.07% 23.88% 19.56%
3. United Kingdom  4.50% 4.95% 4.61%
4. Australia 3.20% 3.30% 2.58%
5. Canada 2.85% 2.92% 2.38%
6. France  2.09% 2.18% 1.90%


From the cities' perspective, Chicago came out of nowhere to take the 1st spot in 2019, while it was not in the top 10 in 2018; I guess this is probably related to the intensive Einstein Analytics posts in 2019. New York climbed to spot #4, and London dropped to #6. The top 6 cities represent almost 22% of the total visitors.

Top Visitor City 2019 2018 2017
1. Chicago 6.60% n/a  1.82%
2. Bengaluru (#1 in 2018) 5.27% 6.08% 5.24%
3. Hyderabad (#2 in 2018) 3.27% 3.76% 3.11%
4. New York (#5 in 2018) 2.26% 2.30% 2.80%
5. Pune (#4 in 2018) 2.20% 2.32% 1.97%
6. London (#3 in 2018) 2.14% 2.50% 2.37%
* San Francisco was in #3 in 2017 and #6 in 2018
* Chicago was in #8 in 2017 and not in top 10 in 2018


These statistics include access from desktop and mobile. The top 5 web browsers together contribute more than 98% of the visitors. All browser positions stay the same as in 2018, but Internet Explorer gained a higher share of visitors.

Top Visitor Web Browser 2019 2018 2017
1. Chrome 83.13% 83.34% 81.33%
2. Internet Explorer 7.05% 5.43% 4.65%
3. Firefox  3.57% 4.64% 6.36%
4. Safari  2.91% 3.28% 5.32%
5. Edge 1.58% 1.71% 1.11%


In terms of operating systems, there is no change in the top 5. There is a slight decrease in Windows users, while mobile and tablet users stay about the same at close to 6% of total visitors; this makes sense because most visitors access this blog when they run into difficulty configuring Salesforce on a desktop or laptop.

Top Visitor Operating System 2019 2018 2017
1. Windows 74.19% 75.13% 73.36%
2. Macintosh 17.63% 17.12% 17.93%
3. Android 3.28% 3.15% 2.63%
4. iOS  2.57% 2.80% 4.10%
5. Linux 1.65% 1.36% 1.23%


As mentioned in last year's blog, the 1536x864 screen resolution is probably 1920x1080 with the display set to 125% zoom, and 1280x720 is 1920x1080 at 150% zoom. In total, 1920x1080 contributes more than 42%. I have no idea about 1024x768, but some of that contribution probably comes from tablets and mobile.

Top Visitor Screen Resolution 2019 2018 2017
1. 1920x1080 24.38% 21.87% 20.87%
2. 1366x768 15.62% 20.09% 21.63%
3. 1536x864  8.88% 8.53% 8.12%
4. 1280x720 (#5 in 2018) 8.75% 6.63% 4.63%
5. 1440x900 (#4 in 2018) 7.95% 8.23% 8.44%
6. 1024x768 (#8 in 2018) 6.35% 3.63% n/a


Top 5 Popular Pages
For popular pages, 3 of the top 5 pages from 2018 stayed in the top 5 in 2019. And 3 of the top 5 pages in 2019 were written in 2018.

2. Salesforce: How to export Attachments? ~ 6,023 hits [2014] (#1~2018; #1~2017; #4~2016; #4~2015; #5~2014)
5. Salesforce: Activity Controlled by Parent ~ 2,970 hits [2015] (#5~2018; #17~2017)


Top 5 Referral Site
1. Salesforce Ben ~ 10 Most Popular Salesforce Admin Blogs
2. Kelsey Shannon ~ EA Certification Study Guide Part 1: Data Layer
3. Kelsey Shannon ~ EA Certification Study Guide Part 5: Security
4. SrinuSFDC ~ Admin 201 Sample Questions 1 - 20
5. Kelsey Shannon ~ EA Certification Study Guide Part 6: Administration



Thursday, January 2, 2020

Einstein Lead Scoring: Getting Started

Einstein looks at your company’s past leads, including any custom fields, to find patterns in your successful lead conversion history. Einstein Lead Scoring then determines which of your current leads fit your success patterns best. Each lead receives a score indicating how well it fits your patterns, along with insights about which of the lead’s fields affect its score most.

If you have a Sales Cloud Einstein or High Velocity Sales license, then Einstein Lead Scoring is available for you to use.

menu availability depends on the licenses provisioned


Steps to enable and configure Einstein Lead Scoring

1. Add Remote Site Settings
example:
Remote Site Name CS41
Remote Site URL https://cs41.salesforce.com

2. Permission Set
You can use the standard Sales Cloud Einstein or High Velocity Sales User permission set, or create a custom permission set and make sure to enable the Use Einstein Lead Scoring permission under App Permissions. Assign the permission set to the users.

3. Enable Einstein Lead Scoring
Go to the Setup menu and type Assisted Setup; it should be under Einstein Sales. Follow the wizard, starting with the setup for Einstein Lead Scoring.






It may take up to 24 hours for Einstein to score the Leads. Once finished, you will get a notification.



You will find the Einstein Scoring component available on the Lead Lightning record page.




Reference: Prioritize Leads with Einstein Lead Scoring


