Monday, April 18, 2011

Middleware & SOA Services

Telkite SOA Vision
Telkite aims to seamlessly integrate business processes and the underlying applications to facilitate real-time information sharing and create a nimble enterprise that successfully blends IT reuse with agility. This vision entails a five-stage process, beginning with the implementation and development of applications to automate homogeneous business processes. Stages 1 and 2 cover the integration of heterogeneous business processes using SOA and EAI, while stage 3 is the creation of self-improving and continuously optimizing business processes. Stage 4 entails the creation of real-time information sharing mechanisms using business intelligence and business activity monitoring, and finally stage 5 uses Web 2.0 to build enterprise solutions.


Telkite Services
The Middleware & SOA practice offers business process integration solutions to large enterprises which are faced with typical integration and reuse challenges in key operational business processes.

At Telkite we have provided services to customers on their journey from mass production to mass customization. The services are classified into five key areas of integration, namely EDI, EAI, BPM, IDM and SOA-driven integration. Delivery of these services is guided by the practice's focus on business process integration.

Services include:
EAI Services
EAI Product Evaluation
Integration Design
Implementation
Migration and Upgrades
Support and Maintenance
Testing

BPM Services
Platform Implementation
Process Modelling
Process Optimization
Support and Maintenance
Testing

IDM Services
Single Sign On
Access Management
Provisioning, User life Cycle Management
Personalization
Workflow
Integration with BPM and Middleware

SOA Services
SOA Roadmap and Consulting
ESB Implementation
SOA Testing
Service Creation
Service Extraction
Service Utilization and Management

SOA – Aligning IT with Business

Informatica Powercenter up and running in less than 30 minutes and yes it’s free (well free trial at least, with full PDF documentation)!


This is a very simple and quick tip for everybody who wants to have Informatica PowerCenter up and running in a very short period of time and does not want to spend a dime. Yes, all components are free and you are not breaking any license.

Take this as a very quick demo.

What you need

1. VMWare Player
2. MS Windows Server
3. Oracle Database
4. Informatica Server and Client

Here we go …

First, let's start with VMware. There are some other free options, but I really like VMware Player. It's quick to install, very easy to use, and Unity mode blows every other option away. You can get the free VMware Player here

After you install it, the second step is to download the free 30-day evaluation of Windows 2003 Server. After you download it, you can easily open it (click OPEN, not NEW instance, and open WIN2K3R2EESP2.vmc) using VMware Player – please note VMware Player converts it to its own format (this conversion took around 30 seconds on my PC).

Now start VMware with Windows Server 2003 (make sure you enable networking; it is turned off by default in the VMware machine). After you log in to the server using Administrator/Evaluation1, download the Oracle Database (download it from within the Windows 2003 server you just installed). The easiest way is to start with Oracle XE – it's super easy to install and you really don't need to know much about Oracle. You can get it here. The installation is painless; the only thing you need to set besides the destination directory is the password for SYS.

After you install it, open the web console and either create a new user or enable the SCOTT user – this user will be used by the Informatica repository and you need this ID during the Informatica installation. XE comes with an easy-to-use web interface, and it should take a couple of seconds to create a new user ID. You can of course use SQL*Plus instead.
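If you go the command-line route, here is a minimal sketch of creating a dedicated repository user with SQL*Plus (the user name, password and privileges below are illustrative assumptions, not a documented Informatica requirement):

-- connect as a DBA first, e.g. sqlplus system/<your_sys_password>@XE
CREATE USER infa_repo IDENTIFIED BY infa_repo;   -- hypothetical repository user and password
GRANT CONNECT, RESOURCE TO infa_repo;            -- basic privileges so the repository tables can be created
-- note this user ID and password; the Informatica installer will ask for them later
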

The last component is Informatica PowerCenter. I'm not sure if Informatica offers a demo version, but Oracle does. Wait, you said Oracle? That's right. Back in the old days, when Hyperion was a standalone company, they offered Informatica PowerCenter as part of their Essbase server offering; they called it Hyperion Data Integration Management, sometimes referred to as DIM. I'm a bit surprised Oracle still offers it. From what I can tell it's really a full-blown Informatica PowerCenter 8.1 with a few cut-down features – you cannot run in parallel and you cannot use some of the drivers which come with PowerCenter when it is bought through Informatica. Download your copy here

The installation is very straightforward. When the setup asks for the database location, select:

database: ORACLE
host name:  localhost
SID: XE
USERID/PASSWORD: whatever you created after you installed Oracle Database

Grab your license key here: http://licensecodes.oracle.com/hyperion.html

That's pretty much it. The downloads of the software components can take some time (Windows and Informatica are pretty big), but the actual installation is very quick and does not require more than a couple of reboots (one for VMware, one for Windows 2003).

A small note: it can happen that when you reboot the Windows 2003 server, the Informatica services do not start. This is likely because they try to start before Oracle XE does. The easiest fix is to wait for all services to start, then restart the Oracle XE services and then start the Informatica services (both have shortcuts in the START > PROGRAMS menu).

One additional note about Windows vs Linux vs Solaris … Yes, I prefer Oracle on Linux or Solaris, but let's be honest: if you are doing it for the first time, you are going to spend quite a lot of time making sure all the prerequisites are met. Installing XE on Windows is very quick and requires virtually no special settings.


Informatica 9, a complete data integration platform


Informatica is a leading vendor in the data integration market and the first independent provider of data integration software. Its best-known tool, and the heart of its platform, is Informatica PowerCenter, which has gone through many versions and is a reference point in the world of integration.

But apart from PowerCenter, Informatica also has other tools that focus on more specific purposes, although they are integrated into the platform and always remain in the context of data integration.

The Informatica 9 platform is designed to cover the full data integration lifecycle, which consists of five main steps: access, discovery, cleansing, integration and delivery.

Figure: the Informatica 9 platform covering the complete data integration lifecycle.

As usual with major software vendors, Informatica has many products, options and editions, and it can take some effort to get an idea of what each one does and which ones we do or do not need, but the fact that Informatica is an independent provider dedicated exclusively to data integration software makes a big difference.

This article reviews the main products that make up the Informatica 9 platform, grouped by the type of problem they solve, and provides a brief description of each one.


Data Integration
  Informatica PowerCenter
  Informatica PowerExchange
 
Data Quality
  Informatica Data Explorer
  Informatica Data Quality
  Informatica Identity Resolution
 
B2B Data Exchange
  Informatica B2B Data Exchange
  Informatica B2B Data Transformation
 
Information Lifecycle Management
  Informatica Data Archive
  Informatica Data Subset
  Informatica Data Privacy


Data Integration

The Data Integration products are the most general-purpose ones and, to make an analogy, we could compare them with the ETL tools from other vendors.
 

Informatica PowerCenter

 

It could be called Informatica's flagship. It connects to a multitude of data sources in real time or in batch, and can even capture changes in the data (CDC).

Like other ETL tools, it lets you define and apply the necessary transformations to this data and then deliver it to the appropriate destination systems.

In PowerCenter I would emphasize the ease of use of its visual development tools, its efficiency and scalability, the ability to extend its functionality by purchasing 'extra' options, and its integration with the other applications of the platform.

There are three product editions, each designed to cover a different type of requirement: the Standard Edition with the basic options, the Advanced Edition, which incorporates more advanced options, and the Real Time Edition, which is aimed at real-time data integration.

 

Figure: the PowerCenter Metadata Manager in Informatica 9.
 

Informatica PowerExchange

 

This tool allows you to directly access, process and deliver data that lives on platforms which often require intermediate steps before a standard ETL tool can manage them.

PowerExchange can connect to SaaS (Software as a Service) applications, all kinds of databases, email, LDAP, web services, XML, etc.

An advanced version, Complex Data Exchange, can even work with complex data formats such as EDI, HL7, SWIFT, EDIFACT, etc.

With some of these platforms it can work in real time, or even use CDC (Change Data Capture) technology, which detects changes to the data in a non-intrusive way, without overburdening the source system with unnecessary queries.

It also integrates with Informatica PowerCenter and Informatica Data Quality.

 

 

Data Quality

Not much needs to be said to define this group: all the tools designed to help improve the quality of a company's data are here.
 

Informatica Data Explorer

 

This is Informatica's data profiling tool. It allows easy profiling of data at the column level, at the table level and across tables, which Informatica calls data analysis in three dimensions, and it can also work on complex data.

From the analysis and profiling it generates metadata, relates sources and targets, and creates reports that let you track the whole data quality process, including anomalies, weaknesses and improvements over time.

It also integrates with Informatica PowerCenter and Informatica Data Quality to view results, create mappings automatically, or share specifications for the cleansing and processing of data.

 

 

Informatica Data Quality 

 

Data Quality has a broader scope than Data Explorer: it is designed to manage the whole data quality process (profiling, specification, cleansing, validation and monitoring), allowing data analysts, developers and administrators to participate in a coordinated manner. Each role has its own profile-oriented interface, accessible in a web environment.

A key part of its functionality is the ability to define rules and quality services that can be reused across different data quality projects.

It also has predefined rules for cleansing and geocoding addresses for more than 60 countries.

 

 

Informatica Identity Resolution

 

Identity resolution software detects records that appear in the system as different individuals but which, from the similarity between the values associated with them, can be deduced to correspond to the same identity. In other contexts this process is also called customer deduplication, and it cannot be missing from a data quality project.

Informatica Identity Resolution combines similarity-comparison algorithms in an efficient manner, taking into account typographical errors and changes in the data, and can even compare data across different languages and even different alphabets.

Processes can operate in both batch and real time, and it has APIs that allow the identity-detection features to be embedded in other applications.

 

 

B2B Data Exchange

 

This group encompasses the tools used to facilitate data integration with other businesses, with the outside world, where the way information is accessed, the standards and the protocols all change, and where it is vital to ensure the quality of incoming data without compromising the security of internal systems.

This product family, consisting of Informatica B2B Data Exchange and Informatica B2B Data Transformation, is aimed at the effective exchange of information between enterprises, and offers great flexibility in terms of formats, allowing for both structured and unstructured data.

The tools are integrated with the rest of the Informatica platform, and the collection of external data also incorporates the necessary security measures so that it can be integrated seamlessly with internal data.

 

Informatica B2B Data Exchange, Informatica B2B Data Transformation

Informatica B2B Data Transformation provides easy-to-use processing and data quality functions so that moving external data into internal systems is straightforward and requires no programming.

With Informatica B2B Data Exchange you can also define profiles, both internally and for the external business partners with which information is exchanged, so that on the same platform you can define rules for transactions and speed up the protocol setup before the exchange starts.

It also manages current and historical events and enables transaction monitoring.

 

 

Information Lifecycle Management

Information has a lifecycle, and much data becomes obsolete as time passes. Data is also moved and replicated across different environments, some more critical than others, and efficient management of space is important. Much of the data needs to be protected and managed so that only the appropriate profiles can see it in the clear. It is important to manage the information lifecycle, and these are the tools Informatica provides to help with this task.
 

Informatica Data Archive

 

Informatica Data Archive manages the archiving, with or without compression, of inactive data, so that it stops consuming space and resources in the main production systems, while still maintaining referential integrity and remaining accessible through the tool and the various interfaces it provides.

It lets you define rules and create metadata for data archiving, and provides direct connectivity to various databases, ERP and CRM systems, and even custom applications.

Another important feature is that, by managing the archiving, it lets you analyze and actively manage data growth.

 


Informatica Data Subset

 

This application is used to create subsets of data from, for example, the full data set of a production environment.

It allows you to define these subsets, as well as policies for creating, replicating or maintaining this data from the 'complete' source. The software is also responsible for maintaining referential integrity within the data that makes up the subsets.

It includes accelerators for various ERP and CRM systems, and can greatly facilitate the creation and maintenance of development environments that are reduced in size and in date range.

 

 

Informatica Data Privacy

 

Finally, this application centrally manages the masking of data that requires it within the organization, helping to comply with data protection laws, prevent leaks of sensitive data, and facilitate the creation of development environments that are 100% functional but do not expose critical data.

It lets you define masking rules and offers different algorithms or ways to apply them, while ensuring the consistency and integrity of the masked data. Note that masking allows the dissociation of data values while maintaining their functionality and readability.

As expected, it also incorporates application accelerators, including predefined masking rules, so it can be rolled out quickly on various ERP and CRM applications.

 

 


Informatica Services

Our Informatica team comprises Informatica Certified Professionals who have extensive expertise in delivering comprehensive solutions encompassing data warehouse development, implementation, maintenance, operations and support using the Informatica PowerCenter 8.6 suite.


These solutions include:

Rigorous business requirements gathering and analysis involving end-users 
Dimensional modeling 
ETL architecture and development 
Data Integration and reconciliation 
Data Quality and meta data management 
Data Mart Development 
Reporting: Custom and OLAP solutions 
Deployment and change management 
Maintenance, Operation, and Support

Some of the key technical implementation features for the above solutions are:

Handling multiple terabytes of data 
Working with heterogeneous data sources
Handling of XML (which has become the platform-independent information exchange standard in today's business community) 
Implementation of Informatica Web Services 
Posting of data using the HTTP transformation 
Migration of legacy systems to today's commercially available platforms 
Performance optimization, including grid architecture implementation, partitioning, etc.

Understanding Master Data Management (MDM) – Part 1


Informatica announced its continued partnership with Siperian expanding the link between data integration and master data management. This renewed partnership not only shows how complementary products can be used together, but also how partnership strategies reflect industry trends and vendor and product roadmaps. Additionally, when vendors partner with one another and develop connectors enabling technologies to work together without complicated integration tasks, organizations are the main entities that benefit.

This article is broken out into two parts.  Part 1 provides an overview of what Siperian and Informatica offer in terms of technology and how it overlaps.  The main focus of the piece is how Siperian's partner strategy benefits both Siperian and Informatica customers based on Siperian's perspective. An overview is provided about how both technologies work together and what organizations should be aware of when relying on vendor partnerships. Part 2 identifies Informatica's focus on MDM and how they leverage their partnerships to do so and provides an explanation of how both solutions overlap to provide a full master data management solution.

How it works

Before learning about how customers benefit from vendor partnerships, it is important to understand how using Siperian and Informatica together works.  The following provides an overview of how MDM fits within an overall data integration and migration framework.  Information from various operational systems, databases, and unstructured data sources is cleansed and migrated into a hub where one system of record is created.  This is where a single version of the truth gets created in relation to customer, supplier, product, etc.  Additionally, the hub can be used for vertical applications such as pharmaceuticals, telecommunications, oil and gas, and financial services.

Entity views are based on business agreement of what a customer or supplier or chosen MDM focus means.  Once this information is stored it can be fed back into operational systems to keep information up to date, or provide simple look-ups for CRM related applications so that employees constantly know what is happening with the customer. Overall, the goal is to develop one set of data that can be referenced and used to increase data quality and customer value.


Source: Informatica, 2008
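
As a very rough illustration of the "single version of the truth" idea, the hedged SQL sketch below collapses source customer records that share a simple match key into one master row. All table and column names are invented for this sketch, and it does not represent Siperian's or Informatica's actual matching logic, which relies on far more sophisticated fuzzy matching:

-- Collapse staged source records that share the same naive match key
-- (normalized name + email) into one master row per customer.
CREATE TABLE master_customer AS
SELECT UPPER(TRIM(customer_name)) || '|' || LOWER(email) AS match_key,
       MAX(customer_name) AS customer_name,
       MAX(email)         AS email,
       MAX(phone)         AS phone,
       COUNT(*)           AS source_record_count
FROM   stg_customer_all_sources
GROUP  BY UPPER(TRIM(customer_name)) || '|' || LOWER(email);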

Why partnerships matter

Solution providers build their offerings to solve specific needs within the organization.  In many cases, solutions implemented within an organization are chosen to help solve a problem or address a specific requirement.  Even though integration is always considered when selecting new solutions, choices are rarely made based on how they fit with the overall IT infrastructure.  Consequently, activities surrounding integration may take up the bulk of time required to implement a new system.

Partnerships enable quicker integration.  When two solution providers partner to provide easier integration, they generally develop a set of APIs so that each solution can "speak to each other" without additional integration requirements.  These partnerships are generally chosen based on customer interest.  For Siperian and Informatica, there is a high level of overlap as many of Siperian's customers use Informatica to populate their hubs and migrate data across the organization.

Siperian's focus on two types of partnerships

Siperian focuses on two types of partnerships to enhance its offerings to customers. The first consists of ISVs and data service providers. Vendors within the areas of data modeling, data quality, and data matching boost Siperian's solution offerings and enhance its core set of technologies.  Managing data is essential within MDM, and the ability to maintain data quality initiatives and to ensure integration with an organization's current standards (e.g. Informatica or Trillium) means that Siperian can offer organizations a full data management solution.

The second type of partnership is with system integrators, enabling organizations to implement a full suite based on their vertical market.  Siperian is accustomed to handling different entities across 8 verticals, such as pharmaceuticals, financial services, and oil and gas, with hubs focused on areas such as employee, physician, product, customer, etc. The solutions that are built and developed by the system integrators are brought to market within the various vertical markets.

Combined benefits for Siperian and Informatica customers

Estimates from both Siperian and Informatica state that more than half of Siperian customers also use Informatica.  In addition, Identity Systems' linking and matching technology is embedded within Siperian, meaning that all of Siperian's customers deploy the technology.  With Informatica's recent acquisition of Identity Systems, the relationship between the two partners has become stronger as their customer base continues to overlap. Additionally, because Identity Systems is embedded within Siperian, customers only access one company for support.

In terms of Siperian's hub and Informatica specifically, in many cases, organizations use Informatica PowerCenter to move data into the hub. Once data is in the hub, Identity Systems' matching and linking agent is used to create a single source of truth for the organization. Informatica is then used again to populate any of the downstream systems, including CRM, ERP, data warehousing, reporting, etc.

What partnerships mean for end user organizations – the benefits, the challenges

As described above, partnerships enhance the integration between disparate solutions.  For organizations, this generally means that diverse applications can be easily deployed side by side.  In the case of Siperian and Informatica, there is a focus on tight integration because of their customer overlap and Informatica's newer commitment to MDM with the acquisition of Identity Systems. Unfortunately, other vendors may partner with each other but not place strong emphasis on the actual integration components, meaning that the partnership essentially exists on paper.  Consequently, it is important for organizations to ask additional questions when considering a potential solution provider on partner status alone.


Sunday, April 10, 2011

DWH Tutorial - 2


What are virtual cubes?
These are combinations of one or more real cubes and require no disk space to store them. They store only the definitions and not the data of the referenced source cubes. They are similar to views in relational databases.
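
To make the analogy concrete, here is a minimal relational sketch (the table and view names are hypothetical): a view, like a virtual cube, stores only its definition and pulls the data from the underlying objects at query time.

-- The view stores no data of its own; it simply combines two existing summary tables.
CREATE VIEW v_sales_and_returns AS
SELECT s.product_id,
       s.period_id,
       s.sales_amount,
       r.returns_amount
FROM   sales_summary s
JOIN   returns_summary r
  ON   r.product_id = s.product_id
 AND   r.period_id  = s.period_id;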

What are MOLAP cubes?
MOLAP Cubes:   stands for Multidimensional OLAP. In MOLAP cubes the data aggregations and a copy of the fact data are stored in a multidimensional structure on the Analysis Server computer. It is best when extra storage space is available on the Analysis Server computer and the best query performance is desired. MOLAP local cubes contain all the necessary data for calculating aggregates and can be used offline. MOLAP cubes provide the fastest query response time and performance but require additional storage space for the extra copy of data from the fact table.

What are ROLAP cubes?
ROLAP Cubes:   stands for Relational OLAP. In ROLAP cubes a copy of data from the fact table is not made and the data aggregates are stored in tables in the source relational database. A ROLAP cube is best when there is limited space on the Analysis Server and query performance is not very important. ROLAP local cubes contain the dimensions and cube definitions but aggregates are calculated when they are needed. ROLAP cubes require less storage space than MOLAP and HOLAP cubes.

What are HOLAP cubes?
HOLAP Cubes:   stands for Hybrid OLAP. A HOLAP cube combines ROLAP and MOLAP cube characteristics. It does not create a copy of the source data; however, data aggregations are stored in a multidimensional structure on the Analysis Server computer. HOLAP cubes are best when storage space is limited but faster query responses are needed.

 

What is the approximate size of a data warehouse?

You can estimate the approximate size of a data warehouse made up of only fact and dimension tables by estimating the approximate size of the fact tables and ignoring the sizes of the dimension tables.

 

To estimate the size of the fact table in bytes, multiply the size of a row by the number of rows in the fact table. A more exact estimate would include the data types, indexes, page sizes, etc. An estimate of the number of rows in the fact table is obtained by multiplying the number of transactions per hour by the number of hours in a typical work day, then multiplying the result by the number of days in a year, and finally multiplying this result by the number of years of transactions involved. Divide this result by 1024 to convert to kilobytes and by 1024 again to convert to megabytes.

E.g. a data warehouse will store facts about the help provided by a company's product support representatives. The fact table is made up of a composite key of 7 indexes (int data type) including the primary key. The fact table also contains 1 measure of time (datetime data type) and another measure of duration (int data type). 2000 product incidents are recorded each hour in a relational database. A typical work day is 8 hours and support is provided for every day in the year. What will be the approximate size of this data warehouse in 5 years?

 

First calculate the approximate size of a row in bytes (int data type = 4 bytes, datetime data type = 8 bytes):

 

Size of a row = size of all composite indexes (add the size of all indexes) + size of all measures (add the size of all measures).

 

Size of a row (bytes) = (4 * 7) + (8 + 4).

Size of a row (bytes) = 40 bytes.

 

Number of rows in fact table = (number of transactions per hour) * (8 hours) * (365 days in a year)

Number of rows in fact table = (2000 product incidents per hour) * (8 hours) * (365 days in a year)

Number of rows in fact table = 2000 * 8 * 365

Number of rows in fact table = 5840000

Size of fact table (1 year) = (Number of rows in fact table) * (Size of a Row)

Size of fact table (bytes per year) = 5840000 * 40

Size of fact table (bytes per year) = 233600000.

Size of fact table (megabytes per year) = 233600000 / (1024*1024)

Size of fact table (in megabytes for 5 years) = (233600000 * 5) / (1024 * 1024)

Size of fact table (megabytes) = 1113.89 MB

Size of fact table (gigabytes) = 1113.89 / 1024

Size of fact table (gigabytes) = 1.088 GB

 


DWH Tutorial - 1


What is OLTP?
OLTP stands for Online Transaction Processing.
OLTP uses normalized tables to quickly record large amounts of transactions while making sure that these updates of data occur in as few places as possible. Consequently, OLTP databases are designed for recording the daily operations and transactions of a business. E.g. a timecard system that supports a large production environment must successfully record a large number of updates during critical periods like lunch hour, breaks, startup and close of work.

What are dimensions?
Dimensions are categories by which summarized data can be viewed. E.g. a profit summary in a fact table can be viewed by a Time dimension (profit by month, quarter, year), Region dimension (profit by country, state, city), Product dimension (profit for product1, product2).

What are fact tables?
A fact table is a table that contains summarized numerical and historical data (facts) and a multipart index composed of foreign keys from the primary keys of related dimension tables.

What are measures?
Measures are numeric data based on columns in a fact table. They are the primary data which end users are interested in. E.g. a sales fact table may contain a profit measure which represents profit on each sale.
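
To tie the last three definitions together, here is a minimal, hypothetical star-schema sketch in SQL; the table and column names are purely illustrative:

-- Dimension tables: the categories by which summarized data is viewed.
CREATE TABLE dim_time    (time_id    INT PRIMARY KEY, calendar_date DATE, month_no INT, quarter_no INT, year_no INT);
CREATE TABLE dim_region  (region_id  INT PRIMARY KEY, city VARCHAR(50), state VARCHAR(50), country VARCHAR(50));
CREATE TABLE dim_product (product_id INT PRIMARY KEY, product_name VARCHAR(100));

-- Fact table: a multipart key built from the dimension foreign keys,
-- plus the numeric measures that end users analyze.
CREATE TABLE fact_sales (
    time_id       INT NOT NULL REFERENCES dim_time(time_id),
    region_id     INT NOT NULL REFERENCES dim_region(region_id),
    product_id    INT NOT NULL REFERENCES dim_product(product_id),
    sales_amount  DECIMAL(12,2),   -- measure
    profit_amount DECIMAL(12,2),   -- measure
    PRIMARY KEY (time_id, region_id, product_id)
);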

What are aggregations?
Aggregations are precalculated numeric data. By calculating and storing the answers to a query before users ask for it, the query processing time can be reduced. This is key in providing fast query performance in OLAP.
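
A rough sketch of one precalculated aggregation, reusing the hypothetical star-schema tables above, could look like this:

-- Precompute profit by product and month so common queries do not
-- have to scan the detailed fact table every time.
CREATE TABLE agg_profit_product_month AS
SELECT f.product_id,
       t.year_no,
       t.month_no,
       SUM(f.profit_amount) AS total_profit
FROM   fact_sales f
JOIN   dim_time   t ON t.time_id = f.time_id
GROUP  BY f.product_id, t.year_no, t.month_no;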

What are cubes?
Cubes are data processing units composed of fact tables and dimensions from the data warehouse. They provide multidimensional views of data, querying and analytical capabilities to clients.

What is the PivotTable® Service?
This is the primary component that connects clients to the Microsoft® SQL Server™ 2000 Analysis Server. It also provides the capability for clients to create local offline cubes, using it as an OLAP server. PivotTable® Service does not have a user interface; clients using its services have to provide their own user interface.



What are offline OLAP cubes?
These are OLAP cubes created by clients, end users or third-party applications accessing a data warehouse, relational database or OLAP cube through the Microsoft® PivotTable® Service. E.g. Microsoft® Excel™ is very popular as a client for creating offline local OLAP cubes from relational databases for multidimensional analysis. These cubes have to be maintained and managed by the end users who have to manually refresh their data.


DWH Tutorials


What is a data warehouse?
A data warehouse is a collection of data marts representing historical data from different operations in the company. This data is stored in a structure optimized for querying and data analysis: a data warehouse. Table design, dimensions and organization should be consistent throughout a data warehouse so that reports or queries across the data warehouse are consistent. A data warehouse can also be viewed as a database for historical data from different functions within a company.

What is a data mart?
A data mart is a segment of a data warehouse that can provide data for reporting and analysis on a section, unit, department or operation in the company, e.g. sales, payroll, production. Data marts are sometimes complete individual data warehouses which are usually smaller than the corporate data warehouse.

What are the benefits of data warehousing?
Data warehouses are designed to perform well with aggregate queries running on large amounts of data.

The structure of data warehouses is easier for end users to navigate, understand and query against, unlike relational databases, which are primarily designed to handle lots of transactions.

Data warehouses enable queries that cut across different segments of a company's operation. E.g. production data could be compared against inventory data even if they were originally stored in different databases with different structures.

Queries that would be complex in highly normalized databases can be easier to build and maintain in data warehouses, decreasing the workload on transaction systems.

Data warehousing is an efficient way to manage and report on data that is from a variety of sources, non uniform and scattered throughout a company.

Data warehousing is an efficient way to manage demand for lots of information from lots of users.

Data warehousing provides the capability to analyze large amounts of historical data for nuggets of wisdom that can provide an organization with competitive advantage.

What is OLAP?
OLAP stands for Online Analytical Processing.
It uses database tables (fact and dimension tables) to enable multidimensional viewing, analysis and querying of large amounts of data. E.g. OLAP technology could provide management with fast answers to complex queries on their operational data or enable them to analyze their company's historical data for trends and patterns.
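
For example, a typical OLAP-style question such as "profit by country and quarter for a given year" maps onto a query like the following sketch, using hypothetical star-schema tables:

-- Summarize a profit measure across the Region and Time dimensions.
SELECT r.country,
       t.quarter_no,
       SUM(f.profit_amount) AS total_profit
FROM   fact_sales f
JOIN   dim_region r ON r.region_id = f.region_id
JOIN   dim_time   t ON t.time_id   = f.time_id
WHERE  t.year_no = 2010
GROUP  BY r.country, t.quarter_no
ORDER  BY r.country, t.quarter_no;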


Learnings: Informatica PowerCenter


1. Assigning a workflow variable's value to a mapping variable in a session (Components tab -> pre-session variable assignment, Informatica 8.6) is not possible in a reusable session. It will not assign the value if the session is reusable. If you want to assign a value like that, use a normal session, i.e. a session created directly in the workflow.

2. If a parameter is a string, there is no need to put the value in single quotes in the parameter file. In the mapping, too, there is no need to put single quotes around the parameter (see the parameter file sketch after this list).

3. If you have a reusable session and in the workflow you call the same session under different names, and the corresponding mapping contains parameters, then you need to list the parameters under both session names in the parameter file.

4. If you have mapplet variables/parameters, prefix the mapping parameter(s) with the mapplet name in the parameter file.

5. The IS_DATE function handles both an all-zero date such as 00000000 (yyyymmdd) and an invalid date such as 19790332 (yyyymmdd); in both cases the function reports that these are not valid dates.

6. $targetname is a workflow variable and $var is a mapping variable. If you want to assign the $var value to $targetname, and the $var variable gets its value from the mapping, then assign it in the session -> Components -> 'post-session command succeeded' setting. (Informatica 8.6)

7. $targetprefix is one more mapping variable; if you want to assign a value to it from $targetname (which should already have a value), assign it in the session -> Components -> pre-session command. (Informatica 8.6)

8. XML Reader error (Informatica 7.1.2)

READER_1_1_1> HIER_28056 XML Reader: Error [NoGrammarResolver] occurred while parsing:[Error at (file , line 0, char 0 ): An exception occurred! Type:NetAccessorException, Message:Could not connect to the socket for URL 'http://oasissta.caiso.com/mrtu-oasis/xsd/OASISReport.xsd'.]; line number [0]; column number [0]

READER_1_1_1> Wed Mar 04 11:22:26 2009

READER_1_1_1> HIER_28056 XML Reader: Error [NoUseOfxmlnsAsPrefix] occurred while parsing:[Error at (file /home/infopc/informatica/server/SrcFiles/caliso/price/CALISO_PRICE_RAW_RUC_LMP_GRP_RUC_LMP_4934823.xml, line 2, char 272 ): Fatal error encountered during schema scan.]; line number [2]; column number [272]

READER_1_1_1> Wed Mar 04 11:22:26 2009

READER_1_1_1> HIER_28058 XML Reader Error

 

Solution for the above error

 

Set the attribute 'Validate XML Source' to 'Do not validate' in the session. Below is the path.

Edit session -> go to mappings tab -> click on source file (XML) in Sources folder -> properties (right-hand side) -> Validate XML Source (4th attribute)

 

9. Dynamic lookup: you can only use the equality operator (=) in the dynamic lookup condition.

 

10. Mapplet: in a mapplet, the mapplet output transformation should get its data from a single transformation (such as an Expression transformation), not from two transformations. If we connect two transformations to a single mapplet output transformation, the mapplet itself will still validate, but when we call the mapplet in a mapping and try to validate the mapping, it will give an error (it pops up an error message about memory).
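
Referring back to points 2, 3 and 4 above, here is a hedged sketch of what the relevant parameter file entries might look like. The folder, workflow, session, mapplet and parameter names are all made up for illustration, and the exact header syntax should be checked against the PowerCenter documentation for your version:

[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$COUNTRY_CODE=US
mplt_lookup.$$LKP_START_DATE=20110101

[MyFolder.WF:wf_load_sales.ST:s_m_load_sales_eu]
$$COUNTRY_CODE=EU
mplt_lookup.$$LKP_START_DATE=20110101

The string value US carries no quotes (point 2), the mapplet parameter is prefixed with the mapplet name (point 4), and because the same reusable session is called under two different names in the workflow, the parameters are repeated under both session headers (point 3).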
