Tuesday 24 June 2014

Differences: Decode and IIF, Router and Filter, Joiner and Lookup, Joiner and Union, Sorter and Aggregator, Sorter and Rank, Connected Lookup and Unconnected Lookup, View and Materialized View


DECODE
IIF
DECODE can be used in a SELECT statement.
IIF cannot be used in a SELECT statement.
EXAMPLE:
DECODE(deptname, 'SALES', 1, 0)
-- If deptname is SALES, return 1; otherwise return 0.
EXAMPLE: IIF( GRADE > 1, IIF( GRADE < 2, SAL1, IIF( GRADE < 3, SAL2, IIF( GRADE < 4, SAL3, BONUS))), 0 )
-- If GRADE is not greater than 1, return 0; otherwise return SAL1, SAL2, SAL3 or BONUS depending on the grade range.
DECODE gives better readability to users.
IIF does not give good readability.
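For readability, the nested IIF above could be rewritten with a searched DECODE; this is only a sketch, assuming the same GRADE, SAL1, SAL2, SAL3 and BONUS ports:
DECODE(TRUE, GRADE <= 1, 0, GRADE < 2, SAL1, GRADE < 3, SAL2, GRADE < 4, SAL3, BONUS)
-- The conditions are evaluated in order and the first TRUE match is returned, which reads closer to CASE..WHEN.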


Router
Filter
A Router transformation can capture the rows of data that do not meet any of the conditions in a default output group.
A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition.
A Router transformation is a single-input, multiple-output-group transformation.
A Filter transformation is a single-input, single-output-group transformation.
In a Router transformation, you can specify more than one filter condition.
In a Filter transformation, you can specify only one filter condition.
The Router transformation does not discard input rows; records that fail every group condition are passed to the default group.
In a Filter transformation, records that fail the condition are dropped.
The Router transformation acts like an IIF condition in Informatica or CASE..WHEN in a database.
The Filter transformation works like the WHERE clause of SQL.
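As a rough SQL analogy (table and column names here are illustrative only), the Filter behaves like a WHERE clause, while the Router behaves like CASE..WHEN routing every row into some group:

-- Filter: rows that fail the condition are dropped
SELECT * FROM orders WHERE status = 'OPEN';

-- Router: every row lands in a group; unmatched rows go to the default group
SELECT order_id,
       CASE WHEN status = 'OPEN'   THEN 'OPEN_GROUP'
            WHEN status = 'CLOSED' THEN 'CLOSED_GROUP'
            ELSE 'DEFAULT_GROUP'
       END AS output_group
FROM orders;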


Joiner:
1) Only the "=" operator can be used in the join condition (see the SQL sketch after these lists)
2) Supports normal, full outer, left/right outer join
3) Active transformation
4) No SQL override
5) Connected transformation
6) No dynamic cache
7) Heterogeneous source
Lookup:
1) The =, <, <=, >, >=, and != operators can be used in the lookup condition
2) Supports left outer join
3) Earlier a passive transformation; from version 9 onwards it can be an active transformation
(it can return more than one record in case of multiple matches)
4) Supports SQL override
5) Connected/Unconnected
6) Supports dynamic cache update
7) Relational/FlatFile source/target
8) Pipeline Lookup
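Point 1 of each list can be pictured in SQL (the tables here are assumed for illustration): a Joiner condition is always an equi-join, while a Lookup condition may also use range operators:

-- Joiner-style condition: equality only
SELECT o.order_id, c.cust_name
FROM orders o
JOIN customers c ON o.cust_id = c.cust_id;

-- Lookup-style condition: non-equality operators are allowed
SELECT o.order_id, t.tax_rate
FROM orders o
JOIN tax_slabs t ON o.amount >= t.min_amount AND o.amount < t.max_amount;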


JOINER
UNION
Using a Joiner we can remove duplicate rows.
Union will not remove duplicate rows.
A Joiner can perform a Normal, Right Outer, Left Outer, or Full Outer join.
Union is equivalent to UNION ALL in SQL.
In a Joiner we have two input groups (master and detail) and one output group.
In a Union we have multiple input groups and one output group.
A join is implemented using the Joiner transformation in Informatica.
A union is implemented using the Union transformation in Informatica.
The Joiner transformation combines data records horizontally based on a join condition.
The Union transformation combines data records vertically from multiple sources.
The Joiner supports both homogeneous and heterogeneous sources.
The Union also supports heterogeneous (different) sources.
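In SQL terms (illustrative tables), the Joiner corresponds to a JOIN that widens rows, while the Union corresponds to UNION ALL that stacks rows:

-- Joiner: combines records horizontally based on a join condition
SELECT e.emp_id, e.emp_name, d.dept_name
FROM emp e
JOIN dept d ON e.dept_id = d.dept_id;

-- Union: combines records vertically from multiple sources (duplicates are kept, like UNION ALL)
SELECT emp_id, emp_name FROM emp_us
UNION ALL
SELECT emp_id, emp_name FROM emp_uk;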


Aggregator
Sorter
Aggregator transformation performs aggregate calculations, such as averages and sums.
The Sorter transformation is used to sort data.
The Aggregator uses an aggregate cache to perform aggregate calculations.
The Sorter uses a sorter cache to perform the sort operation.
Aggregation is done using aggregate expressions, optionally with conditional clauses.
Sorting is done based on the sort key(s) and the sort order.
The Aggregator can use incremental aggregation to perform aggregate calculations incrementally.
There is no equivalent of incremental aggregation in the Sorter.
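A rough SQL analogy (illustrative table): the Aggregator plays the role of GROUP BY with aggregate functions, while the Sorter plays the role of ORDER BY:

-- Aggregator-style: aggregate calculations such as sums and averages
SELECT dept_id, SUM(sal) AS total_sal, AVG(sal) AS avg_sal
FROM emp
GROUP BY dept_id;

-- Sorter-style: order rows on a sort key
SELECT emp_id, sal FROM emp ORDER BY sal DESC;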


Sorter
Rank
The Sorter is used to sort data in either ascending or descending order.
The Rank is used to select the top or bottom ranked data; grouping (group by) can also be done in the Rank.
The Sorter can be used to remove duplicates (use the Distinct output option).
We can also remove duplicates using the Rank.
In the Sorter we cannot assign values to the ports.
In the Rank we can assign variable ports and also write non-aggregate expressions.
In the Sorter, the Integration Service uses the sorter cache to perform the sort operation.
In the Rank, the Integration Service stores group information in an index cache and row data in a data cache.
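A rough SQL analogy (illustrative table): the Sorter is like ORDER BY, while the Rank is like a per-group top-N using a window function:

-- Sorter-style: plain ordering (DISTINCT can drop duplicates)
SELECT DISTINCT dept_id, sal FROM emp ORDER BY sal DESC;

-- Rank-style: top 3 salaries per department
SELECT *
FROM (
  SELECT emp_id, dept_id, sal,
         RANK() OVER (PARTITION BY dept_id ORDER BY sal DESC) AS sal_rank
  FROM emp
) ranked
WHERE sal_rank <= 3;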


Connected Lookup
Unconnected Lookup
A connected lookup is part of the pipeline and receives its input values from the pipeline.
An unconnected lookup is not connected to the pipeline and receives input values from the result of a :LKP expression in another transformation, via arguments.
A connected lookup cannot be used more than once in a mapping.
An unconnected lookup can be called more than once within the mapping.
A connected lookup can return multiple columns from the same row.
An unconnected lookup has one designated return port (R) and returns one column from each row.
A connected lookup can be configured to use a dynamic cache.
An unconnected lookup cannot be configured to use a dynamic cache.
A connected lookup passes multiple output values to another transformation: link lookup/output ports to another transformation.
An unconnected lookup passes one output value to another transformation: the lookup/output/return port passes the value to the transformation calling the :LKP expression.
A connected lookup uses a dynamic or static cache.
An unconnected lookup uses a static cache.
A connected lookup supports user-defined default values.
An unconnected lookup does not support user-defined default values.
For a connected lookup, the cache includes the lookup source columns in the lookup condition and the lookup source columns that are output ports.
For an unconnected lookup, the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
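For illustration, a minimal sketch of calling an unconnected lookup from an Expression transformation; the lookup name LKP_CUSTOMER and the ports cust_id and cust_name are assumed:
IIF(ISNULL(cust_name), :LKP.LKP_CUSTOMER(cust_id), cust_name)
-- If cust_name is missing in the pipeline, fetch it through the unconnected lookup; otherwise keep the existing value.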

The difference between the primary key and the surrogate key:

Primary Key: A primary key is a constraint on a column or set of columns in a table. A table can have only one primary key. The primary key ensures that the column(s) are unique and do not allow any NULL values. By default, the primary key creates a unique index on the column(s). A primary key is defined on an existing column in the table.

Surrogate key: A surrogate key goes by many names: artificial key, integer key, non-natural key, synthetic key, and so on. Simply put, surrogate keys are integers assigned sequentially as needed to populate a dimension table. When creating the table, an additional column is added and a primary key constraint is defined on it. This new column is populated with sequence values; this is the surrogate key.
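A minimal Oracle-style sketch of a dimension table with a surrogate key populated from a sequence (the names are illustrative):

CREATE SEQUENCE dim_customer_seq START WITH 1 INCREMENT BY 1;

CREATE TABLE dim_customer (
  customer_key  NUMBER PRIMARY KEY,    -- surrogate key
  customer_id   VARCHAR2(20),          -- natural/business key from the source system
  customer_name VARCHAR2(100)
);

INSERT INTO dim_customer (customer_key, customer_id, customer_name)
VALUES (dim_customer_seq.NEXTVAL, 'C1001', 'ACME Corp');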


When we issue the STOP command on an executing session task, the Integration Service stops reading data from the source but continues processing, writing and committing the data to the targets. If the Integration Service cannot finish processing and committing the data, we can issue the ABORT command.
In contrast, the ABORT command has a timeout period of 60 seconds. If the Integration Service cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.


View
Materialized view
A view has a logical existence; it does not contain data.
A materialized view has a physical existence.
A view is not a physical object; it stores only the query definition.
A materialized view is a physical database object that stores data.
In general, we cannot perform DML operations on a view.
We can perform DML operations on a materialized view (when it is created as updatable).
When we do SELECT * FROM a view, it fetches the data from the base table(s).
When we do SELECT * FROM a materialized view, it fetches the data stored in the materialized view itself.
A view cannot be scheduled to refresh.
A materialized view can be scheduled to refresh.

We can keep aggregated data in a materialized view. A materialized view can be created based on multiple tables.
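A hedged Oracle-style sketch (the sales and dept tables are assumed) of a materialized view that stores aggregated data and refreshes on a schedule:

CREATE MATERIALIZED VIEW mv_dept_sales
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1    -- refresh once a day
AS
SELECT d.dept_name, SUM(s.amount) AS total_sales
FROM sales s
JOIN dept d ON s.dept_id = d.dept_id
GROUP BY d.dept_name;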


Drop
1. The whole structure and the data are dropped.
2. Cannot rollback.
3. Cannot use a WHERE clause.
4. All data and the structure are dropped simultaneously.
5. DROP is a DDL statement.

Delete
1. The structure is not dropped; only the data is deleted.
2. Can rollback.
3. Can use a WHERE clause.
4. Data is deleted row by row.
5. DELETE is a DML statement.

Truncate
1. The structure is not dropped; only the data is removed.
2. Cannot rollback.
3. Cannot use a WHERE clause.
4. All data is removed at once; the truncate operation drops and recreates the table.
5. TRUNCATE is a DDL statement.
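For example (the table name is illustrative):

DELETE FROM emp WHERE dept_id = 10;   -- DML, row by row, can be rolled back
TRUNCATE TABLE emp;                   -- DDL, removes all rows, cannot be rolled back
DROP TABLE emp;                       -- DDL, removes both data and structure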

Rowid
1. Physical address of the rows.
2. Rowid is permanent.
3. Rowid is a 16-digit hexadecimal value.
4. Rowid gives the address of rows or records.
5. Rowid is an automatically generated unique id of a row; it is generated at the time the row is inserted.
6. ROWID is the fastest means of accessing data.
7. Rowids are unique identifiers for any row in a table.

Rownum
1. Rownum is the sequential number allocated to each returned row during query execution.
2. Rownum is temporary.
3. Rownum is numeric.
4. Rownum gives the count (position) of the returned records.
5. Rownum is a dynamic value automatically generated along with the SELECT statement output.
6. It represents the sequential order in which Oracle has retrieved the row.
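For example (assuming an emp table):

SELECT ROWID, ROWNUM, emp_id
FROM emp
WHERE ROWNUM <= 5;   -- ROWNUM numbers the rows as they are returned; ROWID is the row's physical address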


Stored Procedure
Functions
A stored procedure may or may not return values.
A function must return a value; it can return additional values using OUT arguments.
A stored procedure is used to implement business logic.
A function is used for calculations.
A stored procedure is a pre-compiled statement.
A function is not a pre-compiled statement.
A stored procedure can accept IN, OUT and IN OUT arguments.
A function normally accepts only IN arguments.
Stored procedures are mainly used to process tasks.
Functions are mainly used to compute values.
A procedure cannot be invoked from SQL statements such as SELECT.
A function can be invoked from SQL statements such as SELECT.
A procedure can affect the state of the database, e.g. using COMMIT.
A function (when called from SQL) cannot affect the state of the database.
A procedure is stored as pseudo-code in the database, i.e. in compiled form.
A function is parsed and compiled at runtime.
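A minimal PL/SQL sketch (names are illustrative): the procedure returns its result through an OUT parameter, while the function returns a value and can be called from a SELECT:

CREATE OR REPLACE PROCEDURE get_emp_count (p_dept_id IN NUMBER, p_count OUT NUMBER) AS
BEGIN
  SELECT COUNT(*) INTO p_count FROM emp WHERE dept_id = p_dept_id;
END;
/

CREATE OR REPLACE FUNCTION annual_sal (p_sal IN NUMBER) RETURN NUMBER AS
BEGIN
  RETURN p_sal * 12;
END;
/

SELECT emp_id, annual_sal(sal) FROM emp;   -- a function can be invoked from SQL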


Trigger
Stored Procedure
A trigger does not need to be executed manually; triggers are fired automatically.
Whereas a procedure needs to be executed manually.
Triggers run implicitly when an INSERT, UPDATE, or DELETE statement is issued against the associated table.
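A minimal sketch of a trigger that fires automatically on INSERT (the emp and emp_audit tables are assumed):

CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT ON emp
FOR EACH ROW
BEGIN
  INSERT INTO emp_audit (emp_id, action_date) VALUES (:NEW.emp_id, SYSDATE);
END;
/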


HAVING specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT statement and is typically used with a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause. The HAVING clause is basically used only with the GROUP BY function in a query, while the WHERE clause is applied to each row before the rows become part of the GROUP BY function in a query.
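For example (illustrative table): WHERE filters individual rows before grouping, while HAVING filters groups after aggregation:

SELECT dept_id, SUM(sal) AS total_sal
FROM emp
WHERE hire_date >= DATE '2013-01-01'   -- applied to each row before GROUP BY
GROUP BY dept_id
HAVING SUM(sal) > 100000;              -- applied to each group after aggregation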

Both a primary key and a unique constraint enforce uniqueness of the column(s) on which they are defined. But by default the primary key creates a clustered index on the column, whereas unique creates a non-clustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
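A short sketch (SQL Server-style, since the clustered-index behaviour applies there; names are illustrative):

CREATE TABLE customers (
  customer_id INT NOT NULL PRIMARY KEY,   -- clustered index by default, NULLs not allowed
  email       VARCHAR(100) UNIQUE         -- non-clustered index by default, one NULL allowed
);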