Wednesday, May 9, 2012

Webinar: Informatica Cloud Spring 2012 Release 10am Pacific

We are excited to announce Informatica Cloud Spring 2012. All customers were upgraded on April 21st. With this release we have introduced major enhancements throughout the cloud service, such as the ability to migrate entire task flows between development and production instances of Informatica Cloud. We have also announced a new Developer Edition, which is currently available through an early access program for partners. The Developer Edition will introduce the following:
  • A cloud connectivity API to help you build connectors to a variety of applications rapidly and make them available through the Informatica Marketplace.
  • Cloud integration templates that help you rapidly deploy commonly used integration processes between applications.
Join us for this interactive webinar to get a sneak peek at the new Developer Edition functionality and an in-depth overview of the Spring 2012 release.
Speakers:
  • Darren Cunningham, Informatica Cloud Marketing
  • Ron Lunasin, Informatica Cloud Product Management
Who should attend?
  • Informatica Cloud customers and prospective customers
  • Informatica Cloud partners and prospective partners
  • Anyone interested in the future of cloud integration

Webinar: Integrating Salesforce and SAP 10am Pacific

Are you trying to make your enterprise more social by integrating SAP and Salesforce but facing integration projects that are too complex and lengthy? Are you losing valuable time manually moving SAP data to and from Salesforce? Is inconsistent data making it impossible to track and report on vital information?
If so, you'll want to be sure to attend this educational webinar. You'll learn:
  • The best practices and techniques to integrate Salesforce and SAP
  • A proven and flexible approach to migrating and synchronizing data between both systems
  • How quickly and effectively Informatica Cloud can integrate your Salesforce and SAP data
About Informatica Cloud:
Informatica Cloud delivers data-integration-as-a-service solutions and has been recognized four years in a row as the #1 application for Salesforce.com customers on the AppExchange.

Informatica Adds Developer Edition to Cloud Data-integration Platform

Informatica has added a new developer edition to its cloud-based data-integration platform in a bid to further expand a partner ecosystem around the service, the company announced Monday.

Now available through an early-access program, Informatica Cloud Developer Edition provides systems integrators and software vendors with a Java-based programming interface for creating connectors to various cloud services.
The connectors can have "complete native connectivity" to data objects within an application and can also be easily packaged for sale through Informatica's marketplace, according to the company's website.
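As a purely illustrative sketch (not the actual Informatica Cloud Connector SDK), a Java connector API of this kind typically asks the developer to fill in a contract along these lines; every name below is a hypothetical stand-in:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch only -- not Informatica's real SDK. It shows the kind of
// contract a Java connector interface usually defines: open a connection,
// describe the application's data objects, then read and write their records.
public interface CloudConnector {

    // Open a session against the target cloud application.
    void connect(Map<String, String> connectionProperties);

    // Expose the application's data objects (e.g. "Account", "SalesOrder").
    List<String> listObjects();

    // Expose the fields of one object so mappings can be built against them.
    List<String> describeFields(String objectName);

    // Native read/write access to an object's records.
    List<Map<String, Object>> read(String objectName, String filterExpression);
    int write(String objectName, List<Map<String, Object>> records);

    // Release the session.
    void close();
}
```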
Also featured in the developer edition are Cloud Integration Templates, a library of pre-built workflows for common data-integration scenarios. The basic templates can be tweaked as desired, according to Informatica. A REST (representational state transfer) API can be used to embed the templates natively into a cloud application.
Developer Edition was announced in conjunction with the general availability of Informatica Cloud's Spring 2012 edition.
The release includes easier ways to move cloud integration objects between development and production environments, as well as between Informatica Cloud instances. The update also adds support for using version 24 of Salesforce.com's Web Services API (application programming interface), Informatica said.
Cloud data-integration technology is becoming more and more important as customers adopt on-demand services and wish to tie them back to on-premises systems, as well as to other cloud software.
Informatica Cloud competes with rivals such as IBM's Cast Iron offering and Dell's Boomi platform, as well as open-source offerings such as Talend.
Pricing starts at US$1,000 per month for Informatica Cloud Professional Edition, with Basic, Standard and Enterprise versions available at higher cost. The company also offers a number of Express editions with more limited feature sets.

Some Thoughts about Data Proximity for Big Data Calculations – Part 2

Treating Big Data Performance Woes with the Data Replication Cure Blog Series – Part 2
In my last posting, I suggested that the primary bottleneck for performance computing of any type, including big data applications, is the latency associated with getting data from where it is to where it needs to be. If the presumptive big data analytics platform/programming model is Hadoop (which is also often presumed to provide in-memory analytics), though, there are three key issues:
1) Getting the massive amounts of data into the Hadoop file system and memory from where those data sets originate,
2) Getting results out of the Hadoop system to where the results need to be, and
3) Moving data around within the Hadoop application.
That third item could use a little further investigation. Hadoop is built as an open source implementation of Google’s MapReduce, a model in which computation is allocated across multiple processing units in a two-phased manner. During the first phase, “Map,” each processing node is allocated a chunk of the data for analysis; the interim results are cached locally within each processing node. For example, if the task were to count the number of occurrences of company names in a collection of social network streams, then a bucket for each company would be created at each node to hold the count of occurrences accumulated from each stream subset.
During the second phase, “Reduce,” the interim results at each node are then combined across the network. If there were very few buckets altogether, this would not be a big deal. However, if there are many, many buckets (which we might presume due to the “bigness” of the data), the reduce phase might incur a significant amount of communication – yet another example of a potential bottleneck.
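To make the two phases concrete, here is a minimal sketch of that company-mention count written against Hadoop's Java MapReduce API; the class names and the hard-coded company list are assumptions added for illustration:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompanyMentionCount {

    // Map phase: each node scans its local chunk of the stream data and emits
    // (company, 1) for every mention; interim results stay on that node.
    public static class MentionMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final Set<String> COMPANIES =
                new HashSet<>(Arrays.asList("informatica", "salesforce", "sap"));
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String token : line.toString().toLowerCase().split("\\W+")) {
                if (COMPANIES.contains(token)) {
                    context.write(new Text(token), ONE);
                }
            }
        }
    }

    // Reduce phase: each company's per-node buckets are shuffled across the
    // network to a reducer and summed -- the communication cost noted above.
    public static class MentionReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text company, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable count : counts) {
                total += count.get();
            }
            context.write(company, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "company mention count");
        job.setJarByClass(CompanyMentionCount.class);
        job.setMapperClass(MentionMapper.class);
        // Combiner pre-aggregates each node's interim counts before the shuffle.
        job.setCombinerClass(MentionReducer.class);
        job.setReducerClass(MentionReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```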
This theme is not limited to Hadoop applications. Even just looking at analytical appliances used for traditional business intelligence queries, there is a general assumption that, because the data resides within the environment in a way that is supposed to meet the demands of mixed-workload processing, the operational data will generally be in the same locations as the analytical engine. And for those commonplace queries that are used for regularly generated reports, that foreknowledge can indeed be put to good use in a data distribution scheme.
However, not all queries are the same canned queries run over and over again, and in many more sophisticated cases, ad hoc queries with multiple join conditions are going to require that the attributes used for the join conditions be moved from their original allocation and communicated to all the nodes computing the join! In essence, my opinion is that the idea that the data is already where it needs to be is fundamentally flawed, since there is no way that a mixed workload can use the same data resources in their original places except in extremely controlled circumstances in which all the queries are known ahead of time.
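To make that data movement concrete, here is a small, product-agnostic sketch, with made-up rows and a four-node layout, of how hash-partitioning on an ad hoc join key forces rows off the nodes where they originally live:

```java
// Product-agnostic sketch with made-up rows: hash-partitioning on an ad hoc
// join key decides which node each row must reach, so any row stored on a
// different node has to cross the network.
public class JoinShuffleSketch {

    // The node that will compute the join for a given key (floorMod keeps the
    // bucket non-negative even when hashCode() is negative).
    static int targetNode(String joinKey, int numberOfNodes) {
        return Math.floorMod(joinKey.hashCode(), numberOfNodes);
    }

    public static void main(String[] args) {
        int numberOfNodes = 4;
        // Each row: { node the row currently lives on, its join-key value }.
        Object[][] rows = {
                {0, "cust-101"}, {1, "cust-101"}, {2, "cust-205"}, {3, "cust-337"}
        };
        int moved = 0;
        for (Object[] row : rows) {
            int currentNode = (Integer) row[0];
            String joinKey = (String) row[1];
            if (currentNode != targetNode(joinKey, numberOfNodes)) {
                moved++;  // this row would have to be shipped to another node
            }
        }
        System.out.println(moved + " of " + rows.length + " rows must move across the network");
    }
}
```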
So unless we have some additional set of strategies, we are still going to be at the mercy of the network. And as the volumes of data grow, so will the bottlenecks… More next week. I will discuss this topic further on May 23 in the Information-Management.com EspressoShot webinar, Treating Big Data Performance Woes with the Data Replication Cure.

Do You Know Where Your Existing Database Security Solutions Are Failing?

Recently, Oracle announced that its latest April critical patch update does not address the TNS Poison vulnerability uncovered by a researcher four years ago. In addition to vulnerabilities like this one that can be exploited by external attackers, organizations face data breaches from internal negligence and insiders. In a May 2012 survey by the Ponemon Institute, 50% of respondents said that sensitive data contained in databases and applications has been compromised or stolen by malicious insiders such as privileged users. On top of that, 68% find it difficult to restrict user access to sensitive information in IT and business environments.
While databases offer basic security features that can be programmed and configured to protect data, these may not be enough and may not scale with your growing organization. The problem stems from the fact that application development and DBA teams need a solid understanding of each database vendor's specific offerings in order to ensure that the security features have been properly set up and deployed. If your organization has a number of different databases (Oracle, DB2, Microsoft SQL Server) and that number is growing, it can be costly to maintain all the database-specific solutions. Many Informatica customers have faced this problem and looked to Informatica to provide a complete, end-to-end solution that addresses database security on an enterprise-wide level.
Come talk to us at Informatica World and hear from our customers about how they’ve used Informatica to minimize the risk of breaches across a number of use cases including:
- Test data management
- Production support in off-shore projects
- Dynamically protecting PII or PHI data for research portals (illustrated in the sketch after this list)
- Dynamically protecting data in cross-border applications
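To illustrate the dynamic masking idea referenced above (this is only a conceptual sketch, not Informatica's Dynamic Data Masking product), the snippet below obscures PII on its way out to an unprivileged consumer, such as a research portal, while leaving the stored values untouched; the field formats are assumptions:

```java
// Illustrative sketch only -- not Informatica's product. Dynamic masking means
// sensitive values are obscured as results are returned to an unprivileged
// consumer, while the data at rest is left unchanged.
public class PiiMaskingSketch {

    // Keep only the last four digits of a national ID / SSN-style value.
    static String maskNationalId(String nationalId) {
        if (nationalId == null || nationalId.length() < 4) {
            return "***";
        }
        String lastFour = nationalId.substring(nationalId.length() - 4);
        return "***-**-" + lastFour;
    }

    // Hide the local part of an e-mail address but keep the domain for analysis.
    static String maskEmail(String email) {
        int at = email.indexOf('@');
        return at < 0 ? "*****" : "*****" + email.substring(at);
    }

    public static void main(String[] args) {
        System.out.println(maskNationalId("123-45-6789"));      // ***-**-6789
        System.out.println(maskEmail("jane.doe@example.com"));  // *****@example.com
    }
}
```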
At Informatica World, you can meet us in our sessions:
10:10 – 11:10 – Ensuring Data Privacy for Warehouses and Applications with Informatica Data Masking in Room Juniper 3
11:20 – 12:20 – Protecting Sensitive Data Using Informatica’s Test Data Management Solution in Room Starvine 12
Also come to the Informatica Data Privacy booth and lab for in-depth demonstrations and presentations of our data privacy solutions and customer deployments.

Informatica

Informatica is a widely used ETL tool for extracting source data and loading it into the target after applying the required transformations. In the following section, we will explain the usage of Informatica in a data warehouse environment with an example. We are not going into the details of data warehouse design here; this tutorial simply provides an overview of how Informatica can be used as an ETL tool.

Note: The exchanges and companies described here are for illustrative purposes only.
Bombay Stock Exchange (BSE) and National Stock Exchange (NSE) are two major stock exchanges in India on which the shares of ABC Corporation and XYZ Private Limited are traded Monday through Friday, except holidays. Assume that a software company, KLXY Limited, has taken on a project to integrate the data between the two exchanges, BSE and NSE.

To complete this task of integrating the raw data received from NSE and BSE, KLXY Limited assigns responsibilities to data modelers, DBAs, and ETL developers. Many IT professionals may be involved during the entire ETL process, but we highlight the roles of these three only, for easier understanding and better clarity.

Data modelers analyze the data from these two sources (Record Layout 1 and Record Layout 2), design the data models, and then generate scripts to create the necessary tables and corresponding records.

DBAs create the databases and tables based on the scripts generated by the data modelers.

ETL developers map the extracted data from the source systems and load it into the target systems after applying the required transformations.

The complete process of data transformation from the external sources to the target data warehouse is explained in the following sections, each of which is covered in detail.
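As a stand-alone illustration of that flow (not an actual Informatica mapping), the sketch below extracts records from two hypothetical exchange feed files, applies a simple conforming transformation, and loads the result into a single target file; the file names and the symbol/date/price layout are assumptions, since the real record layouts are not shown here:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Minimal extract-transform-load sketch in plain Java, for illustration only.
public class ExchangeFeedEtl {

    public static void main(String[] args) throws IOException {
        List<String> unified = new ArrayList<>();
        unified.add("exchange,symbol,trade_date,close_price");  // target layout

        // Extract: read each exchange's daily feed (hypothetical file names).
        for (String[] source : new String[][] {
                {"BSE", "bse_daily_feed.csv"}, {"NSE", "nse_daily_feed.csv"}}) {
            for (String line : Files.readAllLines(Path.of(source[1]))) {
                if (line.isBlank() || line.startsWith("symbol")) {
                    continue;  // skip header and empty lines
                }
                // Transform: trim fields, tag the record with its exchange,
                // and standardize the price to two decimal places.
                String[] f = line.split(",");
                String symbol = f[0].trim().toUpperCase();
                String tradeDate = f[1].trim();
                String price = String.format("%.2f", Double.parseDouble(f[2].trim()));
                unified.add(String.join(",", source[0], symbol, tradeDate, price));
            }
        }

        // Load: in a real warehouse this would be a bulk insert; here we just
        // write the conformed records to a single target file.
        Files.write(Path.of("stock_trades_target.csv"), unified);
    }
}
```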
