SailPoint IdentityIQ: Batch Requests - Bulk Operations

SailPoint IdentityIQ includes a much-requested and important bulk-operations feature. This feature, known as Batch Requests, is available under the "Setup" menu. It allows us to execute bulk operations by passing a CSV file as input. Batch Requests also support additional configurable properties, such as generating access requests and stopping execution in case of errors.

Here is the list of operations supported by Batch Requests:
  • Create Identity (User)
  • Modify Identity (User)
  • Create Account
  • Delete Account
  • Enable Account
  • Disable Account
  • Unlock Account
  • Assign Role (Add Role)
  • Revoke Role (Remove Role)
  • Provision Entitlement (Add Entitlement)
  • Revoke Entitlement (Remove Entitlement)
  • Change Password

Remember:

  1. For Modify Identity, make sure the fields you want to modify through Batch Requests are editable; otherwise you will get an exception.
  2. Use a pipe ( | ) to assign/revoke multiple roles or entitlements (see the Assign Role example below).
  3. The following operations cannot be combined in the same CSV file, even though they use similar data and columns:
    • Create Identity
    • Modify Identity
    • Change Password

Examples

Create Identity

operation, name, email, department, costcenter, firstname, lastname, manager, userType, employeeNumber
CreateIdentity, CFRGUSON, cfrguson@example.com, IAM, 1001, Craig, Frguson, asmith, Contractor, 00025


Modify Identity

operation, identityname, email, department, costcenter, firstname, lastname, manager, userType, employeeNumber
ModifyIdentity, BSmeeth, bsmeeth@example.com, IAM, 1002, Bob, Smeeth, ssmeeth, Employee, 00022
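
Assign Role

Here is a sketch of assigning multiple roles using the pipe separator from note 2 above. Treat the header columns and the AddRole operation token as illustrative; verify them against the Batch Request documentation for your IdentityIQ version before use.

operation, identityname, roles
AddRole, CFRGUSON, IT_Analyst_Role|IT_Developer_Role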





SailPoint IdentityIQ - Fetch User Role Membership


Here is a sample query to retrieve user-role membership in SailPoint IdentityIQ:


SELECT idy.name AS "Username",
       (SELECT idy2.name FROM spt_identity idy2 WHERE idy.manager = idy2.id) AS "Manager",
       idy.extended1 AS "UserType",
       bun.name AS "Role Name",
       bun.disabled AS "Status"
FROM spt_identity idy, spt_identity_bundles idb, spt_bundle bun
WHERE idy.id = idb.identity_id
AND bun.id = idb.bundle
ORDER BY idy.name;
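
The same query written with explicit ANSI JOIN syntax is functionally equivalent and arguably easier to read (the correlated subquery becomes a LEFT JOIN, so users without a manager still appear):

SELECT idy.name AS "Username",
       mgr.name AS "Manager",
       idy.extended1 AS "UserType",
       bun.name AS "Role Name",
       bun.disabled AS "Status"
FROM spt_identity idy
JOIN spt_identity_bundles idb ON idy.id = idb.identity_id
JOIN spt_bundle bun ON bun.id = idb.bundle
LEFT JOIN spt_identity mgr ON idy.manager = mgr.id
ORDER BY idy.name;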

Tables Used:
  • User Data: SPT_IDENTITY
  • Role Table: SPT_BUNDLE
  • User-Role Membership: SPT_IDENTITY_BUNDLES
Note: In SailPoint IdentityIQ, Roles are also known as Bundles. 

MySQL - Alias Not Working - Oracle SQL Developer

I have SailPoint IdentityIQ running on MySQL and wanted to use Oracle SQL Developer for general querying to look at some data.

Most tables in the SailPoint database have a column named "ID", so it is useful to use an alias in the SELECT query, but somehow the column alias was not working, even for a simple query.

For example: SELECT ID AS "UserKey" FROM SPT_IDENTITY;


After researching this issue, I found the "useOldAliasMetadataBehavior" parameter. In older versions of the MySQL connector, this parameter defaulted to true, but in newer versions it defaults to false. This was why the alias was not working for me.

As a solution, I had to set this parameter to true in the JDBC URL. Here is an example JDBC URL:

jdbc:mysql://IP_Address:Port/identityiq?useOldAliasMetadataBehavior=true
 

Oracle SQL Developer with SailPoint MySQL - Vendor Code 1317

I have SailPoint IdentityIQ running on MySQL and wanted to use Oracle SQL Developer for general querying to look at some data.

The first query execution was successful, but on the second execution I got the error "Query execution was interrupted", with the error code shown as "Vendor Code 1317".

I tried various options suggested on the internet, but nothing worked for me.

Researching further, I found that this issue was supposedly resolved in the MySQL Connector 5.0.x line, yet I was still getting the error with 5.0.4.

It was finally resolved by using a newer version of the MySQL connector, i.e., 5.1.48 (mysql-connector-java-5.1.48) or 5.1.29 (mysql-connector-java-5.1.29).

 

THINGS TO REMEMBER FOR IDM MIGRATION




Nowadays every organization implements an Identity Management (IDM) solution to manage access to its assets and resources. To meet new business needs, organizations either upgrade their existing IDM solution or migrate to a new one.

Upgrading the existing solution is comparatively simple, as vendors provide most of the detailed documentation. Migration to a new solution is more complex: the vendor provides no migration documentation and there are many dependencies on the existing implementation, so it is very important to plan the migration properly to avoid unnecessary delays and make it successful. Most migration projects are delayed because of a lack of planning and preparation.
Before kicking off the actual migration, here are a few questions that must be answered:

1. Do we have up-to-date documentation of the existing implementation?
2. Do we have a document that captures all the dependencies for the migration?
3. Should we go for a big-bang approach or a phased approach?
4. What are the expectations from the new IDM system?


Let’s discuss these questions one by one.

Understanding the existing implementation is the main prerequisite for defining the scope and budget of any migration project. Generally, companies don't maintain up-to-date documentation, but an organization must have at least its Requirements and Design documents. If it doesn't, it must perform an assessment of the existing implementation.

An IDM system manages access to various systems/applications, and in every IDM implementation there are always some common artifacts shared across multiple applications, so it's really important to document these dependencies for each and every application/system. This will help in defining the migration roadmap.

Big-bang or phased approach? Senior management is interested in this question because they need to plan budget and resources accordingly. The answer depends entirely on the type of implementation: I have seen organizations that used IDM as a backend provisioning engine for only a couple of applications/systems, and I have also seen organizations with thousands of applications/systems. Generally, a phased approach is recommended because:
  • Reduced Risk: Provides the ability to learn and improve from each phase.
  • Flexible Timelines: Easier to accommodate unplanned items.
  • Better Budget Planning: The phased approach allows us to plan the budget phase by phase.
  • Same Resources: A big-bang approach requires many resources because everyone is engaged at the same time, whereas in a phased approach the same resources can be reused in each phase.
  • Higher Success Rate: As we learn and improve from each phase, the success rate is maximized.
  • Happy Management: Continuous delivery in each phase keeps management happy.

IDM is a centralized system, and replacing it is a big step for any organization. Additionally, everyone has high expectations of the new IDM solution, since the organization is investing more money and time in a similar solution, so it's really important to understand the limitations and pain points of the existing system.

The next item we need to discuss is HOW SHOULD WE MIGRATE?

As part of planning, we need to answer two key questions:
  1. Which IDM will be our Primary and which our Secondary during the migration?
  2. When are we going to make the switch between IDMs, i.e., for each application, when does the Secondary become the Primary and the Primary become the Secondary?

Always remember one core principle for IDM migration (there are exceptions, which I will explain):

“ONE AND ONLY ONE IDM SYSTEM MUST BE ALLOWED TO WRITE TO THE TARGET APPLICATION/SYSTEM, AND THE OTHER IDM MUST ONLY READ FROM (NOT WRITE TO) THAT SAME TARGET APPLICATION/SYSTEM”

I always get a follow-up question on this principle: why should we follow it?

The answer is to preserve auditing, which is a core function of any IDM system. If IDM1 is creating accounts in Application1 and IDM2 also starts creating accounts in the same system, the accounts created by IDM2 will be flagged as rogue accounts by IDM1 even though they are not rogue. This can disrupt the existing monitoring and alerting setup and will require extra effort to explain to auditors, increasing the chances of FALSE-POSITIVE scenarios.

I mentioned an exception to this principle above; let me explain it with another example. Suppose IDM1 creates Type A accounts in Application1 and IDM2 creates Type B accounts in the same application. These two account types are represented as different applications in the IDM systems, a common design approach when we have to manage multiple types of accounts in the same application. This creates a logical boundary during the migration, so we can allow the two IDM systems to write to the same application. This is not recommended, but it can be used if it's difficult to migrate all the features of that application together.
Let’s go through the definitions of Primary and Secondary IDM.

  • Primary IDM: The IDM responsible for WRITE operations in a particular application.
  • Secondary IDM: The IDM responsible only for READ (not write) operations in the same application.

Note: It is possible that at any given time both IDMs act as Primary, but for different applications.

Going back to our questions: we must identify the Primary and Secondary IDM for each application and define the timeline for making the switch. Just deploying the artifacts/objects/components into the new IDM adds no value until we turn off write operations from the existing IDM, so we must set timelines for this move.


Sometimes we run into situations where:

  • some objects/artifacts are shared between a few applications, and
  • those applications cannot be migrated at the same time due to increased scope or application-specific dependencies.

To overcome such situations, be prepared to make modifications in the existing IDM; sometimes this small effort can make the migration much easier. I have seen organizations that don't allow any new development or code changes in the existing IDM, which is not good. Sometimes it is also necessary to build a bridge between the existing IDM and the new IDM.

HOW TO DEFINE PHASES:

Defining the phases with the correct scope is really important for a migration project. This is where we need help from the dependency document discussed above. The first phase must contain the application(s)/system(s) that have zero or the fewest dependencies on others, and we apply the same formula to define the remaining phases.

As a best practice, connectors for authoritative systems should be migrated in the last phase. They can be migrated in the first phase as well, but that requires additional effort and adds unnecessary complexity. Migrating such connectors in the last phase has minimal impact on downstream applications.


I feel I could write for another two hours on this topic 😉 but I'll end here. This is based on my learning from working on migration projects (note: shared as a recommendation only).

!!! Happy Learning !!!







Role Engineering or Role Mining



Every organization uses some model to manage its resources and assets, and RBAC is the most famous and most widely adopted model for this purpose. RBAC, i.e., Role-Based Access Control, simplifies the management of access to resources and assets. In the RBAC model, all permissions/access are tied to various roles, and controlling access through roles reduces the administrative burden on security practitioners.

Key Benefits:

  • Reduces the number of access requests and approvals
  • Improves the end-user experience
  • Enhances security
  • Very helpful in implementing birthright access
  • Improves productivity
  • Simplifies user access revalidation (certification)
  • Builds the foundation for implementing Separation of Duties


It can become difficult to implement RBAC in large organizations, as every organization has its own set of resources, permissions, job functions, policies, and controls. Identifying the right roles for an organization is a big and important task, and this task is known as Role Engineering, also called Role Mining or Role Discovery. Role Engineering is the process of discovering relationships between access permissions and users or job functions that can be grouped together to form a role.

There are three approaches we use for Role Engineering:
  • Top-Down Approach: Roles are defined based on the organization's business (e.g., job functions and departments)
  • Bottom-Up Approach: Roles are defined from existing application or system access data
  • Hybrid Approach: Roles are defined using a combination of the two approaches above

Earlier, Role Engineering was a manual process: data owners would export the access data to an Excel sheet to analyze and define roles. Nowadays most organizations have implemented IDM products like Oracle Identity Analytics, SailPoint IdentityIQ, CA IDM, etc. Because these products are in use, they hold the up-to-date access data that is a must-have for the Role Mining process. These IDM products come with OOTB Role Mining capabilities, but custom algorithms can be implemented based on organizational needs. Here is the process these products use for defining roles in an organization:

  • Set the Role Engineering attributes
  • Create and run the Role Engineering process
  • Analyze the Role Engineering results
  • Configure and save the role definitions
  • Set the metadata for the roles

Of the above steps, setting the Role Engineering attributes is the key one, because it sets the base and builds the logic for the entire Role Mining process. If we want to perform Role Mining for an application, first we need the access data for that application. Second, we need to define parameters like Job Title, Job Function, Department, User Type, Manager, Job Level, etc. to discover the relationships. Once these two steps are done, the IDM product executes the Role Mining task and shares the results, as the sketch below illustrates.
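
For illustration only, here is a minimal bottom-up sketch of that discovery step in SQL. The tables users(user_id, department, job_title) and user_entitlements(user_id, entitlement) are hypothetical stand-ins, not any product's actual schema:

-- Hypothetical tables, for illustration only:
--   users(user_id, department, job_title)
--   user_entitlements(user_id, entitlement)
-- Entitlements held by many users who share a department and job title
-- are candidates for a common role.
SELECT u.department,
       u.job_title,
       ue.entitlement,
       COUNT(*) AS holders
FROM users u
JOIN user_entitlements ue ON ue.user_id = u.user_id
GROUP BY u.department, u.job_title, ue.entitlement
ORDER BY u.department, u.job_title, holders DESC;

Entitlements that show up for nearly every user in a job function become candidates for a birthright or functional role; real IDM products apply far more sophisticated clustering on top of this basic idea.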

Role Engineering is not a one-time task; rather, it's an ongoing activity that helps organizations maintain better control over their resources and assets. It has also been shown that defining only high-level roles with basic access/permissions does not deliver the expected business benefits. Role Engineering, with any approach (top-down, bottom-up, or hybrid), is a key cornerstone.