I am writing this blog to give an overview of how I implemented SSO for JIRA using the PingFederate federation server. The end client's LDAP was already integrated with PingFederate, which is why they wanted JIRA SSO through the same PingFederate.
When I started the work, the workflow and certain assumptions were as follows:
Users exist in Active Directory.
Users are authenticated using Ping Identity.
You have the agent-config.txt file. You get this file when you set up the adapter for JIRA on the PF server.
Users are currently able to log in to JIRA when the same username exists in JIRA.
SSO has been achieved through the TokenJiraAuth class, which extends JiraSeraphAuthenticator.
SSO works with OpentokenJiraAuth when users are manually added to JIRA or already exist.
Ping Identity provides information about the user from AD to OpentokenJiraAuth.
OpentokenJiraAuth only uses the username and session to validate the user.
When a user logs into JIRA through Ping Identity SSO, OpentokenJiraAuth should check the JIRA user database to see if the username provided by the OpenToken already exists.
If the username does not exist, the user record is inserted with username, real name, and email.
This all happens before the user is redirected to the JIRA homepage.
Steps to implement the SSO:
1- Copy the following files to atlassian-jira/WEB-INF/lib:
opentoken-agent-2.4.jar (plus the supporting libraries below)
commons-beanutils.jar
commons-collections-3.2.jar
log4j.jar
2- Now we will implement our SSO class, which extends JiraSeraphAuthenticator:
package com.pingidentity.opentoken.jira;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.security.Principal;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.atlassian.jira.security.login.JiraSeraphAuthenticator;

// OpenToken agent classes shipped in opentoken-agent-2.4.jar; the package names
// below are assumed and may differ slightly between agent versions.
import com.pingidentity.opentoken.Agent;
import com.pingidentity.opentoken.AgentConfiguration;
import com.pingidentity.opentoken.TokenException;
public final class TokenJiraAuth extends JiraSeraphAuthenticator
{
private static final long serialVersionUID = 3452011252741183166L;
private AgentConfiguration agentConfig;
public Principal getUser(HttpServletRequest request, HttpServletResponse response)
{
Principal user = null;
// agent-config.txt is loaded from the classpath (WEB-INF/classes, see step 3)
String agentConfigLocation = "classpath:/agent-config.txt";
try
{
// Load the OpenToken agent configuration either from the classpath or from disk
InputStream agentConfigStream;
if (agentConfigLocation.startsWith("classpath:"))
{
agentConfigLocation = agentConfigLocation.substring(10);
agentConfigStream = getClass().getResourceAsStream(agentConfigLocation);
}
else
{
agentConfigStream = new FileInputStream(agentConfigLocation);
}
this.agentConfig = new AgentConfiguration(agentConfigStream);
String strTokenName = this.agentConfig.getTokenName();
Agent otkAgent = new Agent(this.agentConfig);
request.getSession(true);
// Reuse the principal already stored in the Seraph session, if present
if ((request.getSession() != null) && (request.getSession().getAttribute("seraph_defaultauthenticator_user") != null))
{
user = (Principal)request.getSession().getAttribute("seraph_defaultauthenticator_user");
}
else
{
String strOTKParam = request.getParameter(strTokenName);
if (strOTKParam != null)
{
// Validate and decrypt the OpenToken, yielding the user attributes from PingFederate
Map userInfo = otkAgent.readToken(request);
if (userInfo != null)
{
String strSubject = (String)userInfo.get("subject");
if (strSubject != null) {
try
{
user = getUser(strSubject);
request.getSession().setAttribute("seraph_defaultauthenticator_user", user);
request.getSession().setAttribute("seraph_defaultauthenticator_logged_out_user", null);
System.out.println("All set");
}
catch (Exception ex)
{
System.out.println(ex.getMessage());
return null;
}
}
else {
return null;
}
}
else
{
return null;
}
}
else
{
return null;
}
}
}
catch (TokenException e)
{
System.out.println("Token Error is " + e.getMessage());
e.printStackTrace();
}
catch (FileNotFoundException eFile)
{
System.out.println("File Not Found Exception. Error is " + eFile.getMessage());
eFile.printStackTrace();
}
catch (SecurityException eSecurity)
{
System.out.println("Security Exception. Error is " + eSecurity.getMessage());
eSecurity.printStackTrace();
}
catch (IOException e)
{
System.out.println("Unable to load OpenToken agent configuration file (" + agentConfigLocation + "). Error: " + e.getMessage());
}
return user;
}
}
Compile the class and place it at the exact package path inside atlassian-jira/WEB-INF/classes.
3- Put the agent-config.txt file at the same location, atlassian-jira/WEB-INF/classes.
4- Go to atlassian-jira\WEB-INF\classes\ and edit the file seraph-config.xml.
Comment out the existing JiraSeraphAuthenticator entry and add the new authenticator, as shown below.
<!-- CROWD:START - If enabling Crowd SSO integration uncomment the following SSOSeraphAuthenticator and comment out the JiraSeraphAuthenticator below -->
<authenticator class="com.pingidentity.opentoken.jira.TokenJiraAuth"/>
<!-- CROWD:END -->
<!-- CROWD:START - The authenticator below here will need to be commented out for Crowd SSO integration -->
<!--
<authenticator class="com.atlassian.jira.security.login.JiraSeraphAuthenticator"/>
-->
<!-- CROWD:END -->
Restart the JIRA service and check with an existing JIRA user.
Once the user logs in through the PF adapter URL, and if the user exists in JIRA, they will be redirected to the JIRA dashboard. In addition, you can always write your own logic in TokenJiraAuth.java to create the user on the fly.
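As an illustration, here is a rough sketch of what that on-the-fly user creation could look like. It assumes JIRA's embedded Crowd API (ComponentAccessor.getCrowdService() and the ImmutableUser builder); the exact classes and method signatures differ between JIRA versions, so treat the names below as assumptions and verify them against your JIRA API documentation.

// Hypothetical helper showing one way to create the JIRA user on the fly when the
// OpenToken subject does not exist yet. The embedded Crowd class and method names
// below are assumptions and may differ across JIRA versions.
package com.pingidentity.opentoken.jira;

import com.atlassian.crowd.embedded.api.CrowdService;
import com.atlassian.crowd.embedded.api.User;
import com.atlassian.crowd.embedded.impl.ImmutableUser;
import com.atlassian.jira.component.ComponentAccessor;

public class JiraUserProvisioner
{
    // Returns the existing user, or creates one from the attributes Ping Identity sent.
    public User findOrCreate(String username, String displayName, String email) throws Exception
    {
        CrowdService crowdService = ComponentAccessor.getCrowdService();
        User existing = crowdService.getUser(username);
        if (existing != null)
        {
            return existing;
        }
        User newUser = ImmutableUser.newUser()
            .name(username)
            .displayName(displayName)
            .emailAddress(email)
            .active(true)
            .toUser();
        // Random credential: authentication itself stays with PingFederate
        return crowdService.addUser(newUser, java.util.UUID.randomUUID().toString());
    }
}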
Hope this article helps you if you are looking to integrate SSO with JIRA. It should also give you direction for OneLogin SSO integration or any other provider.
Cheers!
Today I would like to share a story with my audience. This is regarding one of the challenges I once faced in my technical career; the hurdle was not the Mount Everest peak, but it was no less than that for me.
Now let me explain the task. One of my clients wanted me to write a program where the user can draw. That part was easy using the HTML5 canvas, and we all know canvas is very common nowadays, but he added some more requirements: he wanted to record hand drawing with time and options, and then save each drawing with its record (pixels) in a database table, i.e. the whole drawing process, like a video made of image frames.
So the key requirements for the application were:
Draw an image with options
Record and save it with pixels and time
Replay option (single drawings, or multiple drawings with the same options together)
Replay should use the same time frame as when it was drawn
There are a lot of tools available which provide drawing facilities (for images, sketches, etc.), but none offered saving with pixels and time (some tools may provide it now, but at that time none did); also, most of those tools were desktop/Windows applications.
So I needed to develop my own application, as the client needed a web application. The biggest challenge was saving the drawing with time and pixels, i.e. if the user pauses for a few moments while sketching and then continues, it should be recorded in the same manner, and when the user replays the drawing it should play like a recorded video.
For the user's convenience I decided to use jQuery for all client-side events and AJAX to save data and replay the drawing. I opted for .NET as the platform, C# as the server language, SQL Server as the database, and HTML5 for the canvas. I interacted with the database via .NET web services using AJAX. A sketch of the core recording and replay idea follows below.
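The core idea, independent of the stack, is to store every sampled point together with its time offset from the start of the recording, and then pace the replay using those offsets so pauses are reproduced faithfully. The actual implementation was jQuery on the client with C# web services on the server; the Java sketch below only illustrates that data model and replay timing, and all class and method names are invented for the example.

// Minimal sketch (names invented): record each point with its time offset and
// replay it with the original pacing, so pauses are reproduced like a video.
import java.util.ArrayList;
import java.util.List;

// One sampled point of a stroke: canvas coordinates plus the elapsed milliseconds
// since recording started. Persisting this offset is what makes replay faithful.
class DrawPoint
{
    final int x;
    final int y;
    final long elapsedMs;

    DrawPoint(int x, int y, long elapsedMs)
    {
        this.x = x;
        this.y = y;
        this.elapsedMs = elapsedMs;
    }
}

// Hypothetical drawing target; in the real application this was the HTML5 canvas.
interface DrawingSurface
{
    void drawPixel(int x, int y);
}

// Records points as the user draws and replays them with the original timing.
class DrawingRecorder
{
    private final List<DrawPoint> points = new ArrayList<>();
    private long startTime = -1;

    // Call this for every mouse/touch move event while the pen is down.
    void record(int x, int y)
    {
        long now = System.currentTimeMillis();
        if (startTime < 0)
        {
            startTime = now;
        }
        points.add(new DrawPoint(x, y, now - startTime));
    }

    // Replays the recorded points, waiting between samples so playback takes the
    // same wall-clock time as the original drawing did.
    void replay(DrawingSurface surface) throws InterruptedException
    {
        long replayStart = System.currentTimeMillis();
        for (DrawPoint p : points)
        {
            long wait = replayStart + p.elapsedMs - System.currentTimeMillis();
            if (wait > 0)
            {
                Thread.sleep(wait);
            }
            surface.drawPixel(p.x, p.y);
        }
    }
}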
And with that I was able to save the drawing with time and options as well as replay it; the best part was that I could also play multiple drawings with the same options together.
I am sure this gives you some idea of an approach if you encounter a similar problem: use client-side script so the page does not refresh at every event and the user experience stays smooth, and use web services for server interaction.
Cheers!
Pramod
Main point: Billions of rows X millions of columns
Key Features of HBase:
Modeled after Google’s BigTable
Uses Hadoop’s HDFS as storage
Map/reduce with Hadoop
Query predicate push down via server-side scan and get filters (see the sketch after this list)
Optimizations for real time queries
A high performance Thrift gateway
HTTP supports XML, Protobuf, and binary
JRuby-based (JIRB) shell
Rolling restart for configuration changes and minor upgrades
Random access performance is like MySQL
A cluster consists of several different types of nodes
Best used: Hadoop is probably still the best way to run Map/Reduce jobs on huge datasets. Best if you use the Hadoop/HDFS stack already.
Examples: Search engines. Analysing log data. Any place where scanning huge, two-dimensional join-less tables is a requirement.
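To illustrate the predicate push-down item above, here is a minimal sketch of a server-side filtered scan using the HBase 1.x Java client API. The table name, column family, and qualifier ("weblogs", "cf", "status") are invented for the example.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseFilteredScan
{
    public static void main(String[] args) throws IOException
    {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("weblogs")))
        {
            // The filter runs on the region servers, so only matching rows
            // travel back over the network (predicate push down).
            Scan scan = new Scan();
            scan.setFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("cf"), Bytes.toBytes("status"),
                    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("404")));

            try (ResultScanner scanner = table.getScanner(scan))
            {
                for (Result row : scanner)
                {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            }
        }
    }
}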
Regards,
Alok
Ideally, Big Data is a term for large and huge amounts of data. Nowadays data piles are growing exponentially. This data comes from various sources like call logs, web logs, digital transactions, social media posts, sensor data and logs, pictures, and videos; everything that is digital is contributing.
While Big Data does not specifically indicate any size or quantity, the term is usually used when we talk about data of petabytes and exabytes. Big Data is an evolving and popular term, and in the current age the main challenge with this huge amount of data is how to manage it and how to get productive information from it.
There are three prime factors of Big Data:
1. Volume: analytics on massive amounts of data
2. Velocity: faster and more robust transactions with uninterrupted availability
3. Variety: a wide variety of data from different sources
Where our traditional techniques are inadequate to process high volumes of data, Big Data makes your business more agile, flexible, and swift, and helps convert potential data into useful information. When dealing with larger datasets it helps us manage structured, semi-structured, and unstructured data. Traditional applications and databases take too much time to load voluminous data and obviously cost too much, whereas the newer approaches use complex algorithms for the same work, reducing both time and cost. In such mechanisms the main focus is on mining for information rather than emphasizing data schema and data quality.
Following are a few of the technologies born to handle this buzzword "Big Data":
Cassandra DB,
MongoDB,
HBase,
ElasticSearch, etc.
Cheers!
Pramod
Orchard CMS is a free and open source project built around reusable components for developing ASP.NET applications. We can enable, install, and download shared components for building ASP.NET applications, which helps us develop our own customized, content-centric ASP.NET applications on top of existing modules and features.
The latest Orchard version is 1.8.1, and Orchard can be deployed to both Windows Azure Cloud Services and Windows Azure Web Sites. The beneficial aspects of developing a site using Orchard CMS are:
1. Theme selection option: the provided default theme is very flexible, and we can modify its CSS and design as per our requirements. If we want to apply a new theme to get a new look and feel from scratch, that can also be done here, and in the admin panel the desired theme can be set as current.
2. Orchard Search: Orchard CMS does "Search" by keyword or query syntax for text or phrases. To get the search feature, we need to enable the "Search", "Indexing", and "Lucene" modules from the admin panel, then create an index and attach it to the content type whose content we want to search. Orchard queries the index to get the content items to display.
3. Orchard Tags module: using the Orchard.Tags module, it is very easy to save tags on content items, display the list of items under their specific tag(s), and search on those tags.
4. Orchard Blogs and blog comments: nowadays almost every content management system has features such as pages, blogs, and blog posts, and the same is true of Orchard. Blogs are enabled by default in Orchard and cannot be disabled, but we can enable and disable comments for a blog and set conditions, e.g. whether we want to allow threaded comments or not. Comments are what make a blog/website more interactive and more social.
5. Voting and Stars: rating is the feature by which we can let users vote on our site content. To make it work, we first need to enable the "Voting" module and then the "Stars" module. We then edit the content type to add the Stars part to it; the "Stars" module shows 5 stars to vote on by clicking any of them, and Voting calculates and stores the values automatically.
Finally, the best part of Orchard CMS is that we can enhance and customize it as per our requirements, e.g. develop new modules, and create widgets, taxonomies, workflows, indexes, etc.
Cheers!
Pramod
Main point: Store huge datasets in "almost" SQL
Key Features of Cassandra DB:
Querying by key, or key range (secondary indices are also available)
Data can have expiration (set on INSERT; see the sketch after this list)
Writes can be much faster than reads (when reads are disk-bound)
Map/reduce possible with Apache Hadoop
All nodes are similar, as opposed to Hadoop/HBase
Very good and reliable cross-datacenter replication
Distributed counter data type
You can write triggers in Java
Best use: When you need to store data so huge that it doesn't fit on one server, but you still want a friendly, familiar interface to it.
Examples: Web analytics, to count hits by hour, by browser, by IP, etc. Transaction logging. Data collection from huge sensor arrays.
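As a small illustration of the expiring-data and key-based query items above, here is a sketch using the DataStax Java driver (2.x/3.x style API). The keyspace, table, and column names (analytics, hits_by_hour, hour/ip/hits) are invented for the example.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CassandraTtlExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("analytics"))
        {
            // Record a hit; USING TTL makes the row expire automatically after a day
            session.execute(
                "INSERT INTO hits_by_hour (hour, ip, hits) VALUES (?, ?, ?) USING TTL 86400",
                "2014-06-01 10:00", "10.0.0.1", 1L);

            // Query by partition key; secondary indexes would allow other predicates
            ResultSet results = session.execute(
                "SELECT ip, hits FROM hits_by_hour WHERE hour = ?", "2014-06-01 10:00");
            for (Row row : results)
            {
                System.out.println(row.getString("ip") + " -> " + row.getLong("hits"));
            }
        }
    }
}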
Regards,
Alok