Sunday, November 25, 2012

psql client can be a chess board now...

Yes, the pgChess extension now lets you play a chess game against a Postgres instance (the computer). Thanks to the authors. Installation is very simple with pgxnclient; pgChess is supported by a companion extension called pg2podg.
Documentation Link:
https://github.com/gciolli/pgChess/blob/master/doc/
pgxnclient install pgchess
pgxnclient install pg2podg

After installing, just run the CREATE EXTENSION commands:
 
create extension pgchess;
create extension pg2podg;
There are two types of play: (1) PC vs PC and (2) PC vs Human.
To play a new game of PC vs Human, run the file below (download the source from the above link for the files) in the psql terminal with unaligned output format.
postgres=# \pset format unaligned
postgres=# \i /usr/local/src/pgchess/pgchess-0.1.7/test/sql/play/PG_v_Human.sql
For your move, follow the syntax below when prompted with "Your Move ? ".
        -- For now, we use this syntax:
        --
        --   XXpYYz
        --
        -- where XX is the starting square, YY is the ending square, p
        -- is the piece and z is an optional square where the desired
        -- promotion is specified.
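For example, assuming the piece letter and square notation described above, an opening pawn move from E2 to E4 should look something like this at the prompt (follow the exact notation the game itself displays):

Your Move ? E2PE4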
Enjoy….

--Raghav

Sunday, October 7, 2012

8 New Oracle compatibility features in EnterpriseDB PPAS 9.2 Beta

Several new Oracle compatibility features are included in PostgresPlus Advanced Server 9.2 Beta. Most of them add real charm to the product and are needed by anyone migrating from Oracle to PostgreSQL. There are many more new features in the product itself; however, I would like to walk through a few of them that are related to Oracle compatibility.

1. Object Type Support: In this version, member functions are supported, but not yet MAP member functions.
create or replace type mf_test as object
(
    name varchar2(30),
     member function disp return varchar2
);
create or replace type body mf_test as
    member function disp return varchar2 is
     begin
           return 'Name : '||name;
     end;
end;
edb=# declare
edb-#    v_mf mf_test;
edb$# begin
edb$#    v_mf := mf_test('Raghavendra');
edb$#    dbms_output.put_line(v_mf.disp());
edb$# end;
Name : Raghavendra

EDB-SPL Procedure successfully completed
2. PL/SQL subtypes: We can now define our own subtypes in the declarative part of any PL/SQL block, subprogram, or package.
Syntax: SUBTYPE subtype_name IS base_type[(constraint)] [NOT NULL];

declare
   subtype hiredate is date not null;
   pfdate hiredate := sysdate;
begin
   dbms_output.put_line(pfdate);
end;
06-OCT-12 19:53:44

EDB-SPL Procedure successfully completed
3. DROP TABLE CASCADE CONSTRAINTS: This option deletes all foreign key constraints that reference the table to be dropped, and then drops the table.
edb=# create table master(id int primary key);
edb=# create table master1(id int references master(id));
edb=# \d master1
 Table "enterprisedb.master1"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
Foreign-key constraints:
    "master1_id_fkey" FOREIGN KEY (id) REFERENCES master(id)

edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint master1_id_fkey on table master1
DROP TABLE
edb=# \d master1
 Table "enterprisedb.master1"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
4. TYPEs in a package definition:
create or replace package t_pack as 
    type ftype is record(name varchar2(20)); 
end;
edb=# call t_pack.ftype('EDB');
 ftype
-------
 (EDB)
(1 row)
5. TABLE() function call on nested tables: TABLE() lets you query a collection in the FROM clause as if it were a table.
CREATE OR REPLACE TYPE string_a IS TABLE OF VARCHAR2(765);
select * from table(string_a('abc','xyz')) ;
column_value
--------------
abc
xyz
(2 rows)
6. IN/OUT of UDT in function call: The functions udtabletype_in and udtabletype_out are now supported for nested tables.

7. Usage of reserved keywords (LOG/CURRENT_DATE): The word LOG can now be used to name a function, and CURRENT_DATE can be used as a variable name.
edb=# create or replace function log(t text) return text as 
begin 
   return t;
end;

edb=# select log('EDB');
log
-----
 EDB
(1 row)

edb=# declare
edb-# current_date date := '07-OCT-2012';
edb$# begin
edb$#    dbms_output.put_line(current_date);
edb$# end;
07-OCT-12 00:00:00

EDB-SPL Procedure successfully completed
8. STRING/NVARCHAR2 datatype support: The STRING datatype, and NVARCHAR2 for multi-byte data, are now supported for table columns. STRING is aliased to VARCHAR2, and NVARCHAR2 maps to the varchar PPAS datatype.
edb=# create table dtype( a string, b nvarchar2);
CREATE TABLE
edb=# \d dtype
       Table "enterprisedb.dtype"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 a      | character varying |
 b      | character varying |
Download & release notes link:
http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server/downloads

--Raghav

Wednesday, October 3, 2012

PAGER setting in windows for psql client


psql is a great tool. Oftentimes the output of a particular command in psql is too big to fit on one screen, so it gets wrapped to fit, which makes the output unreadable. To present command output in a readable format we have the PAGER environment variable. On Linux, I can simply set PAGER to the less command plus options to view command output in a readable format.
Eg:-
export PAGER='less -RSX'

On Windows, you can customize psql output the same way as on Linux, with the help of the Windows port of less (less.exe). Below is the download link for less.exe; download the binaries, which come in zip format, and unzip them to your desired location.
http://gnuwin32.sourceforge.net/packages/less.htm

Now, set the Windows system variable PAGER with its value pointing to the less.exe location, and also include the less.exe location in "Path". After setting the system variable, your psql terminal has all the functionality of the less command on Linux. "less.exe +F <filename>" behaves like "tail -f" on Linux. There are many more features in less.exe which I haven't tried, but you can give them a try and make your psql output eye-catching :).
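As a quick sketch, assuming less.exe was unzipped to a folder that is already on the Path, the variable can also be set from a Command Prompt:

set PAGER=less -RSX          (current cmd session only)
setx PAGER "less -RSX"       (stored permanently as a user environment variable)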

PAGER setting screenshot:


--Raghav

Saturday, September 29, 2012

PostgreSQL Processes Names in Windows


If you come from Linux, it's simple to see all the process names (writer process, wal writer, autovacuum, etc.) just by typing 'ps -ef | grep postgres', but on Windows it's a bit tough to see them without the help of Windows Sysinternals tools. Process Explorer, a Windows Sysinternals tool, gives a brief of active/running processes, including their names. There are many options in Process Explorer that are very useful for learning about processes on Windows. I may be late in discovering this tool :), but it's very useful for knowing about Postgres processes.

Process Explorer Download Link:
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx

Once you start Process Explorer, in the VIEW tab choose "lower pane"; under it you have the "Handles" option, which shows process events in the lower pane of the explorer. Process Explorer is then divided into two windows, an upper pane and a lower pane: the upper pane shows all processes, and the lower pane shows each process's events, types, semaphores, etc. Clicking a process in the upper pane (e.g. the postgres processes shown as postgres.exe) shows the complete details of that process in the lower pane.

Note: Make sure you run Process explorer as administrator.

Snapshot:

--Raghav

Monday, July 16, 2012

Swapping Provider, not within slony replicating nodes


My title might be slightly contradictory: as per Slony-I, swapping can be achieved among nodes if the nodes are connected to each other in any way as PROVIDER or RECEIVER and are replicating. If you look at my diagram, "DR-Prod" is nowhere related to the Slony replicating nodes, yet swapping is still possible (with some extra care). This post also carries some valuable inputs from Steve & Jan. Thanks.


You might need this kind of swapping if you are planning to move "Prod" from one data center to another, or from an existing disk volume to another (there are many reasons). The WAL-level replication features in PostgreSQL 9.1 suit this kind of shuffling well, but what if only your Prod (master) has to move, without disturbing the existing partial-replication (Slony) slaves? The overall concept is to reuse the replicated Slony schema (_myrep) on the "DR-Prod" side and then point it to the Slony slave once DR-Prod is promoted as master.

Note: My steps are only for asynchronous replication and a controlled switchover. They won't function properly after an unplanned failure, because the 'slave' node might be further ahead via Slony than your 'DR-Prod' node is via asynchronous streaming replication.

Let me give a simple demo on a local setup similar to the diagram. Assume "Prod" and "Slony Slave" are the master & slave databases running on 5432. Set up streaming replication as "DR-Prod" on port 5433 for the 5432 cluster.

1. Setup master/slave slony replication on localhost on port 5432 and replicating only one table as shown here.

2. Setup streaming replication for 5432 cluster and run it on 5433. To setup streaming replication follow the PostgreSQL wiki.

3. Important steps :

  • Stop Slony daemons of master/slave on port 5432.
  • Promote "DR-Prod" as master, i.e., on 5433.
  • Change the pointers in the _myrep.sl_path table on both ends, i.e. on DR-Prod (which now acts as Prod) and on the Slony slave, using the storepath() function, as shown below:

On 5433, change the pointer to Slave database running on 5432 as shown below. 
master=# select _myrep.storepath(1,2,'host=127.0.0.1 dbname=master user=postgres port=5433',10);
 storepath
------------
 5000000093
(1 row)
master=# select _myrep.storepath(2,1,'host=127.0.0.1 dbname=slave user=postgres port=5432',10);
 storepath
------------
 5000000094
(1 row)
On 5432, change the pointer to Master database running on 5433 as shown below. 
slave=# select _myrep.storepath(1,2,'host=127.0.0.1 dbname=master user=postgres port=5433',10);
 storepath
------------
 5000000085
(1 row)

4. Now start the slon daemon on 5433 for Master database and on 5432 for Slave Database.

5. Hereafter any inserts on 5433 Master would replicate to Slave database on 5432.

Cheers
Raghav

Monday, July 2, 2012

Simple Slony-I Replication Setup.

Shown above is a short overview of Slony-I asynchronous replication. For more information, the Slony-I documentation is your best friend :).

Let's start with the replication methods. In the perltools method, you need to configure Slony at the time of source installation to enable the built-in Perl scripts. These scripts start with "slonik_" and are designed to carry out replication administrative tasks.

My demo for the two methods, shell (slonik) & Perl, is on a localhost single instance (5432) with two databases, master & slave, replicating one table, "rep_table". For replication, master and slave should hold the same table structure. If you have many tables, use the pg_dump/pg_restore structure-only dump option (see the sketch below). Since I am replicating one table, I just created it on both master and slave.
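A minimal sketch of that structure-only dump/restore, using the same master/slave databases on port 5432 as in this demo:

pg_dump -p 5432 -s master | psql -p 5432 -d slave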
Note: Set environment variables like PGDATA,PGPORT,PGHOST,PGPASSWORD & PGUSER.

Source Installation:
Download the Slony-I 2.1 source(http://slony.info/downloads/) 

#bunzip2 slony1-2.1.0.tar.bz2
#tar -xvf slony1-2.1.0.tar
# cd slony1-2.1.0
# ./configure --prefix=/opt/PostgreSQL/9.1/bin \
              --with-pgconfigdir=/opt/PostgreSQL/9.1/bin \
              --with-perltools=/opt/PostgreSQL/9.1/bin
              # Exclude --with-perltools if not needed
# make
# make install
Basic setup on Master/Slave
createdb -p 5432 master
createdb -p 5432 slave

psql -p 5432 -d master -c "create table rep_table(id int primary key);"
psql -p 5432 -d slave -c "create table rep_table(id int primary key);"

Insert some data on master to replicate to slave
psql -p 5432 -d master -c "insert into rep_table values(generate_series(1,10));"
Method 1:  --with-perltools :

1. Create one standard .conf file with information like the log location, number of nodes, set of tables, etc.
$CLUSTER_NAME = 'myrep';
$LOGDIR = '/opt/PostgreSQL/9.1/slonylogs';
$MASTERNODE = 1;
$DEBUGLEVEL = 2;

&add_node(node => 1,host => 'localhost',dbname => 'master',port => 5432,user => 'postgres',password => 'postgres');
&add_node(node => 2,host => 'localhost',dbname => 'slave',port => 5433,user => 'postgres',password => 'postgres');

$SLONY_SETS =
{
    "set1" =>
    {
        "set_id" => 1,
        "table_id" => 1,
        "pkeyedtables" =>
                       [rep_table,],
    },
};
Initialize, create-set & subscribe-set: these are the three phases of Slony replication. For each phase, "slonik_" Perl scripts are created in the location specified at source installation time with the "--with-perltools" option. In my case it's "/opt/PostgreSQL/9.1/bin". The above .conf file is used in all phases.

2. Initialize the cluster. Here slonik, cross-checks the nodes connection.
cd /opt/PostgreSQL/9.1/bin
./slonik_init_cluster -c slon.conf 
./slonik_init_cluster -c slon.conf| ./slonik
3. Create a set, i.e. define which set of tables to replicate from node 1 to node 2.
./slonik_create_set -c slon.conf 1 
./slonik_create_set -c slon.conf 1|./slonik 
4. Start the slon daemons. There will be two slon processes (one per node) to carry out the work; each node's slon process should be started.
./slon_start -c slon.conf 1
./slon_start -c slon.conf 2
5. Subscribe the set. From here on, Slony maintains data consistency between the two nodes by allowing all DML on the master and denying it on the slave.
./slonik_subscribe_set -c slon.conf 1 2 
./slonik_subscribe_set -c slon.conf 1 2|./slonik
After the above steps, your slave now has the replicated data.

Method 2: With standard scripts:

With the standard script method there are many ways to implement this, but for clarity I have split it into the same phases as the Perl method above: initialize, create-set & subscribe-set. All the scripts are bound to the SLONIK command.

1. Create two .conf files for Master & Slave Node.
vi master_slon.conf
cluster_name=myrep
pid_file='/opt/PostgreSQL/9.1/data/master_slon.pid'
conn_info='host=localhost dbname=master user=postgres port=5432'

vi slave_slon.conf
cluster_name=myrep
pid_file='/opt/PostgreSQL/9.1/data/slave_slon.pid'
conn_info='host=localhost dbname=slave1 user=postgres port=5432'
2. Initialize the cluster.
#!/bin/bash
# Initialize Cluster (init_cluster.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave1 user=postgres port=5432';

#Add Node
init cluster (id = 1, comment = 'Primary Node For the Slave postgres');
store node (id = 2, event node = 1, comment = 'Slave Node For The Primary postgres');

#Setting Store Paths ...
echo  'Stored all nodes in the slony catalogs';
store path(server = 1, client = 2, conninfo='host=127.0.0.1 dbname=master user=postgres port=5432');
store path(server = 2, client = 1, conninfo='host=127.0.0.1 dbname=slave1 user=postgres port=5432');
_eof_

$./init_cluster.sh
3. Create a set.
#!/bin/bash
# Create Set for set of tables (create-set.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave1 user=postgres port=5432';

try { create set (id = 1 ,origin = 1 , comment = 'Set for public'); } on error { echo  'Could not create set1'; exit 1;}

set add table (set id = 1 , origin = 1, id = 1, full qualified name = 'public.rep_table1', comment = 'Table action with primary key');
_eof_

$./create-set.sh
4. To start the slon daemons, use the custom script "start_slon.sh" that comes along with the source tarball under the "/tools" location. Modify the script by changing the .conf file locations to create master/slave startup scripts. This script gives you the flexibility to use and track all slon processes with the help of the PIDs mentioned in the .conf files.
Usage: ./master_start_slon.sh [start|stop|status]

-bash-4.1$ ./master_start_slon.sh  start
-bash-4.1$ ./slave_start_slon.sh  start

Sample STATUS output:

-bash-4.1$ ./master_start_slon.sh status
---------------------
Slony Config File    : /opt/PostgreSQL/9.1/slony_scripts/bash_slony/master_slon.conf
Slony Bin Path       : /opt/PostgreSQL/9.1/bin
Slony Running Status : Running...
Slony Running (M)PID : 28487
---------------------
5. Subscribe the set.
#!/bin/bash
# Subscribe Set (subscribe-set.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave1 user=postgres port=5432';

try { subscribe set (id = 1, provider = 1 , receiver = 2, forward = yes, omit copy = false); } on error { exit 1; } echo  'Subscribed nodes to set 1';
_eof_

$./subscribe-set.sh
Now your slave database has the replicated data in the "rep_table" table.
These two methods should help you understand the basic setup of Slony replication. I will be back with more advanced Slony concepts.

--Raghav

Tuesday, June 26, 2012

PostgreSQL Process names on Solaris

PostgreSQL processes are few and countable: writer process, wal writer process, stats collector, autovacuum process, syslogger process, archiver process & the postmaster daemon. If replication is enabled, there will also be wal sender & wal receiver processes. In my trainings, I usually show process information by executing "ps -ef | grep postgres", but how could I show the same on Solaris? So I checked the Solaris documentation and found it's very simple and easy to get the process names, just as on Linux.

The PostgreSQL documentation says to use /usr/ucb/ps with the -ww option to get process names instead of the regular /usr/bin/ps; however, most of the information is hidden with /usr/ucb/ps as well. Let's see how to retrieve the complete postgres process names on Solaris.

Below are my postgres 9.1 instance processes on Solaris:
bash-3.00$ /usr/ucb/ps -awwx | grep postgres
  7778 ?        S  0:04 /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
  7779 ?        S  0:01 /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
  7780 ?        S  0:00 /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
  7781 ?        S  0:00 /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
  7776 pts/5    S  0:00 /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
A more extended way with pargs:
bash-3.00$  pargs `/usr/ucb/ps -awwx | grep postgres | awk '{print $1}'`
7778:   /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
argv[0]: postgres: writer process  
argv[1]:
argv[2]:

7779:   /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
argv[0]: postgres: wal writer process  
argv[1]:
argv[2]:

7780:   /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
argv[0]: postgres: autovacuum launcher process  
argv[1]:
argv[2]:

7781:   /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
argv[0]: postgres: stats collector process  
argv[1]:
argv[2]:

7776:   /Desktop/postgres/9.1-pgdg/bin/64/postgres -D /Desktop/postgres/9.1-pgdg/data
argv[0]: /Desktop/postgres/9.1-pgdg/bin/64/postgres
argv[1]: -D
argv[2]: /Desktop/postgres/9.1-pgdg/data
7776 is the postmaster daemon process.
bash-3.00$ cat /Desktop/postgres/9.1-pgdg/data/postmaster.pid
7776
/Desktop/postgres/9.1-pgdg/data
1339917119
5432
/tmp
localhost
  5432001  50331683
Though it seems simple, I believe it's worth knowing :).

--Raghav

Tuesday, June 5, 2012

Upgrading Slony-I 2.0.x to latest version 2.1.x

Slony-I 2.1 has very good fixes and new features like adding tables in bulk, improvements to WAIT FOR with MERGE SET/MOVE SET, support for TRUNCATE on replicated tables, and many more. I am using Slony-I 2.0.7, so I thought of upgrading it to the latest version. Upgrading Slony-I is very simple and can be achieved in a few steps. My upgrade procedure assumes there is already a master/slave setup with Slony 2.0.7.

Backup Plan:
1. Back up the existing Slony schema (_slonyschema) on master/slave
2. Back up the old Slony binaries
3. Back up all the initially created Slony configuration files

Upgrade Procedure:
1. Stop all running slon processes on all nodes.
2. Install the new version of the Slony 2.1.x binaries.
3. Execute the SLONIK upgrade script.
4. Start Slony with the new binaries on all nodes.

Link: http://slony.info/documentation/2.1/slonyupgrade.html
Current PostgreSQL & Slony version:

repdb=# select substr(version(),1,26) as "PostgreSQL-Version",_myrep.slonyversion();
     PostgreSQL-Version     | slonyversion
----------------------------+--------------
 PostgreSQL 9.1.3 on x86_64 | 2.0.7
(1 row)
Install/configure the latest Slony-I 2.1.x from source
 wget http://main.slony.info/downloads/2.0/source/slony1-2.1.0.tar.bz2
 ./configure --prefix=/opt/PostgreSQL/9.1/bin --with-pgconfigdir=/opt/PostgreSQL/9.1/bin
 make
 make install

After installation, you can find three executables, slon, slonik & slon_logshipper, under "/opt/PostgreSQL/9.1/bin/bin".

-bash-4.1$ ./slon -v
slon version 2.1.0
Upgrade script:
## Upgrade script

cluster name = myrep;
node 1 admin conninfo='host=localhost dbname=postgres user=postgres port=5432';
node 2 admin conninfo='host=localhost dbname=repdb user=postgres port=5433';
UPDATE FUNCTIONS (  ID = 1 );
UPDATE FUNCTIONS (  ID = 2 );

Note: Update all the nodes with UPDATE FUNCTIONS. I have two nodes, master (5432) and slave (5433).
Execute the script:
-bash-4.1$ slonik upgrade_207_201.slonik
Start the slony processes with the new binaries and check for the changes.
postgres=# select substr(version(),1,26) as "PostgreSQL-Version",_myrep.slonyversion();
     PostgreSQL-Version     | slonyversion
----------------------------+--------------
 PostgreSQL 9.1.3 on x86_64 | 2.1.0
(1 row)
You can see my Slony version has been upgraded to the latest. You can also perform a health check on the schema with a function provided by Slony-I in their documentation. The health check function should return TRUE; otherwise your PG & Slony catalogs are damaged somewhere.
Function link: http://slony.info/documentation/2.1/function.slon-node-health-check.html
postgres=# select node_health_check();
 node_health_check
-------------------
 t
(1 row)
--Raghav

Saturday, June 2, 2012

Compiling PL/Proxy with PostgresPlus Advance Server 9.1

PostgresPlus Advanced Server 9.1 (PPAS) is an EnterpriseDB product that comes with enterprise features in addition to community PostgreSQL. Most of the contrib modules (pgFoundry) can be plugged into this product using StackBuilder. However, currently PL/Proxy is not bundled with or downloadable through StackBuilder, so here is how you can compile PL/Proxy with PPAS 9.1.

1. Download and build PL/Proxy.
wget http://pgfoundry.org/frs/download.php/3274/plproxy-2.4.tar.gz
tar -xvf plproxy-2.4.tar.gz
cd plproxy-2.4
make PG_CONFIG=/opt/PostgresPlus/9.1AS/bin/pg_config
make install PG_CONFIG=/opt/PostgresPlus/9.1AS/bin/pg_config

Note: Flex & Bison must be installed before compiling pl/proxy.

2. After successful compilation, you get two files: plproxy.so in $PGPATH/lib & plproxy--2.4.0.sql in the $PGPATH/share/extension/ location.
Execute the .sql file, which creates the call handler & the language.
bash-4.1$ psql -p 5444 -U enterprisedb -d edb -f /opt/PostgresPlus/9.1AS/share/extension/plproxy--2.4.0.sql
CREATE FUNCTION
CREATE LANGUAGE
CREATE FUNCTION
CREATE FOREIGN DATA WRAPPER

Now you can see the language installed.
edb=# \dL
        List of languages
  Name   |    Owner     | Trusted
---------+--------------+---------
 edbspl  | enterprisedb | t
 plpgsql | enterprisedb | t
 plproxy | enterprisedb | f
(3 rows)

3. Let's test some sample code with PL/Proxy.
create table users(username text,blog text);
insert into users values('Raghav','raghavt.blogspot.com');

CREATE or replace  FUNCTION get_user_blog(i_username text)
RETURNS SETOF text AS $$
    CONNECT 'dbname=edb';
    SELECT blog FROM users WHERE username = $1;
$$ LANGUAGE plproxy;

edb=# select * from get_user_blog('Raghav');
          get_user_blog
----------------------------------
 raghavt.blogspot.com
 (1 row)

All set to go testing with PL/Proxy on PPAS 9.1. If you want to know how to set up PL/Proxy, follow the links below.
http://www.depesz.com/2011/12/02/the-secret-ingredient-in-the-webscale-sauce/
http://kaiv.wordpress.com/2007/07/27/postgresql-cluster-partitioning-with-plproxy-part-i/

--Raghav

Tuesday, May 29, 2012

Autonomous Transaction in PostgreSQL 9.1

Currently I am working on migrations from Oracle to PostgreSQL. Though I am a DBA, these days I am learning a bit on the developer track too ... :)
Let's look at a small feature of Oracle and a similar way to do it in PostgreSQL.

Autonomous transaction: what is it?

An autonomous transaction is an independent transaction that is initiated by another transaction, and executes without interfering with the parent transaction. When an autonomous transaction is called, the originating transaction gets suspended. Control is returned when the autonomous transaction does a COMMIT or ROLLBACK.

Example in Oracle:
Create two tables and one procedure as shown below.

create table table_a(name varchar2(50));
create table table_b(name varchar2(50));

create or replace procedure insert_into_table_a is
begin
   insert into table_a values('Am in A');
   commit;
end;

Let's test it here.

SQL> begin
  2  insert into table_b values('Am in B');
  3  insert_into_table_a;
  4  rollback;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select * from table_a;

Am in A

SQL> select * from table_b;

Am in B
In my example above, line 3 has committed line 2, which should have been rolled back according to line 4. I am looking for the transaction blocks to behave independently; to achieve that in Oracle we need to include PRAGMA autonomous_transaction in the procedure declaration so it behaves as an independent transaction block. Let's retake:
Truncate table table_a;
Truncate Table table_b;

create or replace procedure insert_into_table_a is pragma autonomous_transaction;
begin
   insert into table_a values('Am in A');
   commit;
end;

SQL> begin
  2  insert into table_b values('Am in B');
  3  INSERT_INTO_TABLE_A;
  4  rollback;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select * from table_a;

NAME
----------
Am in A

SQL> select * from table_b;

no rows selected
How to make this work in PostgreSQL?

Autonomous transactions are very well controlled in Oracle. Similar functionality is not there in PostgreSQL; however, you can achieve it with a hack using dblink. Below is the link where the hack was provided:
http://archives.postgresql.org/pgsql-hackers/2008-01/msg00893.php
create extension dblink;

create or replace function insert_into_table_a() returns void as $$
begin
    perform dblink_connect('pragma','dbname=edb');
    perform dblink_exec('pragma','insert into table_a values (''Am in A'');');
    perform dblink_exec('pragma','commit;');
    perform dblink_disconnect('pragma');
end;
$$ language plpgsql;

edb=# begin;
BEGIN
edb=# insert into table_b VALUES ('am in B');
INSERT 0 1
edb=# select insert_into_table_a();
 insert_into_table_a
---------------------

(1 row)

edb=# select * from table_a;
  name
---------
 Am in A
(1 row)

edb=# select * from table_b;
  name
---------
 am in B
(1 row)

edb=# rollback;
ROLLBACK
edb=# select * from table_a;
  name
---------
 Am in A
(1 row)

edb=# select * from table_b;
 name
------
(0 rows)

Isn't it simple? Thanks to the provider of the hack.

--Raghav



Monday, April 16, 2012

Caching in PostgreSQL

Caching...!! It's a little hard to cover it briefly in a single article, but I will try to share, in short, what I have learnt from Heikki / Robert Haas / Bruce Momjian. In PostgreSQL there are two layers, PG shared buffers and the OS page cache; any read/write has to pass through the OS cache (no bypassing it, as of now). Postgres writes data to the OS page cache and confirms to the user that it has been written to disk; later the OS cache writes it to the physical disk at its own pace. PG shared buffers have no control over the OS page cache and do not even know what's in the OS cache. So most of the recommendations given by Postgres DBAs/professionals are to have faster disks / a better cache.

Caches/buffers in PostgreSQL are as strong as in other databases and highly sophisticated. As I am from an Oracle background (mindset too…:) ), my questions to the people I learnt from were how/when/what/why, etc., regarding the database buffer cache, pinned buffers, flushing the database buffer cache, preloading the database, and so on. I got all my answers from them, though the approach is a bit different. Even though my questions were bugging, they answered with great patience and clarified things to a good extent, and as a result you are reading this blog.... :)..


From some of my learnings (still learning), I have drawn a small overview of how data flows between memory and disk in Postgres, and also of some of the important tools and the NEW patch by Robert Haas (pg_prewarm).

pg_buffercache
A contrib module which tells what's in the PostgreSQL buffer cache. Installation below:
postgres=# CREATE EXTENSION pg_buffercache;
pgfincore
It provides information about what data is in the OS page cache. The pgfincore module becomes very handy when it is clubbed with pg_buffercache: one can then get PG buffer cache & OS page cache information together. Thanks to Cédric Villemain. pgfincore's backbone is fadvise and fincore, which are Linux ftools; you can also use fincore/fadvise by installing them from source. Either way, you can use the pgfincore contrib module or the ftools scripts, both give the same result. I tried both, and they are simply awesome.
Installation:
Download the latest version: http://pgfoundry.org/frs/download.php/3186/pgfincore-v1.1.1.tar.gz
As root user:
export PATH=/usr/local/pgsql91/bin:$PATH     # Set the path to point to pg_config.
tar -xvf pgfincore-v1.1.1.tar.gz
cd pgfincore-1.1.1
make clean
make 
make install

Now connect to PG and run below command

postgres=# CREATE EXTENSION pgfincore;
pg_prewarm
Preloading a relation/index into the PG buffer cache. Is it possible in PostgreSQL? Oh yes, thanks to Robert Haas, who has recently submitted a patch to the community; hopefully it might be available in PG 9.2 or PG 9.3. However, you can use the patch for your testing on PG 9.1.

pg_prewarm has three modes:

  1. PREFETCH: fetches data blocks asynchronously into the OS cache only, not into PG buffers (hits OS cache only)
  2. READ: reads all the blocks into a dummy buffer and forces them into the OS cache (hits OS cache only)
  3. BUFFER: reads all the blocks, or a range of blocks, into the database buffer cache.

Installation:
I am applying the pg_prewarm patch to my PG source installation; you need to tweak this as per your setup.

  1. Untar location of PG source : /usr/local/src/postgresql-9.1.3
  2. PG installation location : /usr/local/pgsql91
  3. All downloads Location : /usr/local/src

Note: Install PG before applying pg_prewarm patch.

1. Download the patch to /usr/local/src/ location
http://archives.postgresql.org/pgsql-hackers/2012-03/binRVNreQMnK4.bin
Patch attached Email:
http://archives.postgresql.org/message-id/CA+TgmobRrRxCO+t6gcQrw_dJw+Uf9ZEdwf9beJnu+RB5TEBjEw@mail.gmail.com
2. After downloading, go to the PG source location and follow these steps.
# cd /usr/local/src/postgresql-9.1.3
# patch -p1 < ../pg_prewarm.bin         (I have renamed after download)
# make -C contrib/pg_prewarm
# make -C contrib/pg_prewarm install
3. The above commands will create the files under $PGPATH/contrib/extension. Now you are ready to add the contrib module.
postgres=# create EXTENSION pg_prewarm;
CREATE EXTENSION
postgres=# \dx
                          List of installed extensions
      Name      | Version |   Schema   |              Description
----------------+---------+------------+----------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_prewarm     | 1.0     | public     | prewarm relation data
 pgfincore      | 1.1.1   | public     | examine and manage the os buffer cache
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(4 rows)

Documentation:
/usr/local/src/postgresql-9.1.3/doc/src/sgml
[root@localhost sgml]# ll pgpre*
-rw-r--r-- 1 root root 2481 Apr 10 10:15 pgprewarm.sgml
dstat
dstat is a Linux command that combines vmstat, iostat, netstat, top, etc. into one tool. When the database behaves unusually and we want to know the cause at the OS level, we usually open a couple of terminals to pull process, memory, disk read/write and network information, and it's a bit of a pain to shuffle between windows. dstat has several options within it which help show all of that output in one window.
Installation:
Dstat download link: (RHEL 6)
wget http://pkgs.repoforge.org/dstat/dstat-0.7.2-1.el6.rfx.noarch.rpm
or
yum install dstat
Documentation: http://dag.wieers.com/home-made/dstat/
Linux ftools
It is designed for working with modern Linux system calls including mincore, fallocate, fadvise, etc. ftools will help you figure out which files are in the OS cache. Using the Perl/Python scripts you can retrieve OS page cache information for the object files (pg_class.relfilenode). pgfincore is based on this. You can use either pgfincore or the ftools scripts.
Installation:
Download the tar.gz from the link.
https://github.com/david415/python-ftools

cd python-ftools
python setup.py build
export PYTHONPATH=build/lib.linux-x86_64-2.5
python setup.py install

Note: You need to have python & psycopg2 installed before installing python-ftools.
Now we are all set to proceed with an example to try out the tools & utilities. In my example I have a table with one index & one sequence, and 100+ MB of data in it.
postgres=# \d+ cache
Table "public.cache"
Column |  Type   |                Modifiers                | Storage  | Description
--------+---------+-----------------------------------------+----------+-------------
name   | text    |                                         | extended |
code   | integer |                                         | plain    |
id     | integer | default nextval('icache_seq'::regclass) | plain    |
Indexes:
"icache" btree (code)
Has OIDs: no
Query to find the size occupied by the table, sequence and its index:
postgres=# SELECT c.relname AS object_name,
CASE when c.relkind='r' then 'table'
when c.relkind='i' then 'index'
when c.relkind='S' then 'sequence'
else 'others'
END AS type,pg_relation_size(c.relname::text) AS size, pg_size_pretty(pg_relation_size(c.relname::text)) AS pretty_size
FROM pg_class c
JOIN pg_roles r ON r.oid = c.relowner
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE (c.relkind = ANY (ARRAY['r'::"char", 'i'::"char", 'S'::"char",''::"char"])) AND n.nspname = 'public';

object_name |   type   |   size   | pretty_size
-------------+----------+----------+-------------
icache_seq  | sequence |     8192 | 8192 bytes
cache       | table    | 83492864 | 80 MB
icache      | index    | 35962880 | 34 MB
(3 rows)

Total object size 'cache'

postgres=# select pg_size_pretty(pg_total_relation_size('cache'));
pg_size_pretty
----------------
114 MB
(1 row)
I have written a small query clubbing pgfincore and pg_buffercache to pull information from the PG buffers & OS page cache. I will be using this query throughout my example, pasting only its output.
select rpad(c.relname,30,' ') as Object_Name,
case when c.relkind='r' then 'Table' when c.relkind='i' then 'Index' else 'Other' end as Object_Type, 
rpad(count(*)::text,5,' ') as "PG_Buffer_Cache_usage(8KB)",
split_part(pgfincore(c.relname::text)::text,','::text,5) as "OS_Cache_usage(4KB)"
from pg_class c inner join pg_buffercache b on b.relfilenode=c.relfilenode
     inner join pg_database d on (b.reldatabase=d.oid and d.datname=current_database() and c.relnamespace=(select oid from pg_namespace where nspname='public'))
group by c.relname,c.relkind
order by "PG_Buffer_Cache_usage(8KB)"
desc limit 10;

object_name | object_type | PG_Buffer_Cache_usage(8KB) | OS_Cache_usage(4KB)
-------------+-------------+----------------------------+---------------------
(0 rows)

Note: I have bounced the cluster to flush PG buffers & OS Page Cache. So, no data in any Cache/buffer.
Preloading relation/index using pg_prewarm:
Before bouncing the cluster, I fired a full-table sequential scan query on the "cache" table and noted the time, i.e. before warming the relation/index.
postgres=# explain analyze select * from cache ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Seq Scan on cache  (cost=0.00..26192.00 rows=1600000 width=19) (actual time=0.033..354.691 rows=1600000 loops=1)
Total runtime: 427.769 ms
(2 rows)
Let's warm the relation/index/sequence using pg_prewarm and check the query plan.
postgres=# select pg_prewarm('cache','main','buffer',null,null);
pg_prewarm
------------
10192
(1 row)
postgres=# select pg_prewarm('icache','main','buffer',null,null);
pg_prewarm
------------
4390
(1 row)

Output of combined buffers:
object_name | object_type | PG_Buffer_Cache_usage(8KB) | OS_Cache_usage(4KB)
-------------+-------------+----------------------------+---------------------
icache      | Index       | 4390                       | 8780
cache       | Table       | 10192                      | 20384
(2 rows)
pgfincore output:
postgres=# select relname,split_part(pgfincore(c.relname::text)::text,','::text,5) as "In_OS_Cache" from pg_class c where relname ilike '%cache%';
relname   | In_OS_Cache
------------+-------------
icache_seq | 2
cache      | 20384
icache     | 8780
(3 rows)

or for each object.

postgres=# select * from pgfincore('cache');
relpath      | segment | os_page_size | rel_os_pages | pages_mem | group_mem | os_pages_free | databit
------------------+---------+--------------+--------------+-----------+-----------+---------------+---------
base/12780/16790 |       0 |         4096 |        20384 |     20384 |         1 |        316451 |
(1 row)
To retrieve similar information using the python-ftools script you need to know the object's relfilenode number; check below.
postgres=# select relfilenode,relname from pg_class where relname ilike '%cache%';
relfilenode |    relname
-------------+----------------
16787 | icache_seq       /// you can exclude sequence.
16790 | cache            /// table
16796 | icache           /// index
(3 rows)
using python-ftools script


Isn't it interesting....!!!!
Now compare the explain plan after warming the table into the buffers.
postgres=# explain analyze select * from cache ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Seq Scan on cache  (cost=0.00..26192.00 rows=1600000 width=19) (actual time=0.016..141.804 rows=1600000 loops=1)
Total runtime: 215.100 ms
(2 rows)
How to flush/prewarm a relation/index in the OS cache?
Using pgfadvise, you can preload a relation into, or flush it from, the OS cache. For more information, type \df pgfadvise* in the terminal to see all functions related to pgfadvise. Below is an example of flushing the OS cache.
postgres=# select * from pgfadvise_dontneed('cache');
relpath      | os_page_size | rel_os_pages | os_pages_free
------------------+--------------+--------------+---------------
base/12780/16790 |         4096 |        20384 |        178145
(1 row)
postgres=# select * from pgfadvise_dontneed('icache');
relpath      | os_page_size | rel_os_pages | os_pages_free
------------------+--------------+--------------+---------------
base/12780/16796 |         4096 |         8780 |        187166
(1 row)
postgres=# select relname,split_part(pgfincore(c.relname::text)::text,','::text,5) as "In_OS_Cache" from pg_class c where relname ilike '%cache%';
relname   | In_OS_Cache
------------+-------------
icache_seq | 0
cache      | 0
icache     | 0
(3 rows)
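The reverse, preloading a relation into the OS cache, can be done with pgfadvise_willneed; a quick sketch (check \df pgfadvise* for the exact functions available in your pgfincore version):

postgres=# select * from pgfadvise_willneed('cache');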
While these things are going on in one window, you can check the read/write ratio in another by using dstat. For more options use dstat --list.
dstat -s --top-io --top-bio --top-mem

Preloading a range of blocks using pg_prewarm's range functionality.
Assume, for some reason, you want to bounce the cluster, but one big table which is in the buffers is performing well. On bouncing, your table is no longer in the buffers; to get back to the state it was in before bouncing, you have to know how many table blocks were in the buffers and preload them using the pg_prewarm range option.

I have created a table by querying pg_buffercache, and later I sent that block range information to pg_prewarm. With this, shared buffers are back with the table that was loaded in them earlier. See the example.
select c.relname,count(*) as buffers from pg_class c 
inner join pg_buffercache b on b.relfilenode=c.relfilenode and c.relname ilike '%cache%' 
inner join pg_database d on (b.reldatabase=d.oid and d.datname=current_database()) 
group by c.relname 
order by buffers desc;
relname | buffers
---------+---------
cache   |   10192
icache  |    4390
(2 rows)
Note: These are the blocks in buffer.

postgres=# create table blocks_in_buff (relation, fork, block) as select c.oid::regclass::text, case b.relforknumber when 0 then 'main' when 1 then 'fsm' when 2 then 'vm' end, b.relblocknumber from pg_buffercache b, pg_class c, pg_database d where b.relfilenode = c.relfilenode and b.reldatabase = d.oid and d.datname = current_database() and b.relforknumber in (0, 1, 2);
SELECT 14716
Bounce the cluster and preload the range of blocks related to the table into the buffers from "blocks_in_buff".
postgres=# select sum(pg_prewarm(relation, fork, 'buffer', block, block)) from blocks_in_buff;
sum
-------
14716
(1 row)

postgres=# select c.relname,count(*) as buffers from pg_class c
inner join pg_buffercache b on b.relfilenode=c.relfilenode and c.relname ilike '%cache%'
inner join pg_database d on (b.reldatabase=d.oid and d.datname=current_database())
group by c.relname
order by buffers desc;
relname | buffers
---------+---------
cache   |   10192
icache  |    4390
(2 rows)
See, my shared buffers are back in play.

Enjoy…!!! I will be back with more interesting stuff. Do post your comments.

--Raghav

Sunday, April 8, 2012

Duplicate Rows in a primary key Table.

Back again; I'm getting very little time for blogging :)

"ERROR: could not create unique index
DETAIL: Table contains duplicated values."

This error is thrown by Postgres when it encounters duplicate rows in a primary key table, failing either of the commands REINDEX or CREATE UNIQUE INDEX.

Why do duplicate rows exist in a table?

Not sure exactly :), nor are there any proven explanations out there...
Two things come to my mind.

Firstly, it might be delayed index creation, or if you have shared sequences in a database, sharing them across two different primary key tables might be the cause while restoring data into the table (pg_restore). Secondly, if a huge transaction is taking place on that table and someone abruptly stops the instance at the backend, that might also cause the index (primary key) to fail to point to the right rows.

How to fix it ?

Well, as common practice, when we encounter duplicate rows in a table (whatever the reason), we first filter out the duplicate rows and delete them, and then doing a REINDEX should fix the issue.

Query for finding duplicate rows:
select count(*),primary_column from table_name group by primary_column having count(*) > 1;
If REINDEX or CREATE UNIQUE INDEX still fails even after deleting the duplicate rows, it means your index has not been cleaned properly. The above query might not give the 100% accurate output you are expecting, because the query is going to use the index which is already corrupted with duplicate rows. See the explain plan below.
postgres=# explain select count(*),id from duplicate_test group by id having count(*) > 1;
                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=0.00..5042.90 rows=99904 width=4)
   Filter: (count(*) > 1)
   ->  Index Scan using duplicate_test_pkey on duplicate_test  (cost=0.00..3044.82 rows=99904 width=4)
(3 rows)
We need to catch the CTIDs of the duplicate rows from the main table and delete them with a conditional statement of CTID + primary key value.

I have played a bit with the pg_catalogs to violate the primary key table and reproduce the scenario with a similar error. (Please don't do it.)
postgres=# create unique index idup on duplicate_test(id);
ERROR:  could not create unique index "idup"
DETAIL:  Key (id)=(10) is duplicated.
My Table Definition & Data:
postgres=# \d duplicate_test
Table "public.duplicate_test"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 name   | text    |
Indexes:
    "duplicate_test_pkey" PRIMARY KEY, btree (id)

postgres=# select * from duplicate_test ;
 id |  name
----+---------
 10 | Raghav    ---Duplicate
 20 | John H
 30 | Micheal
 10 | Raghav    ---Duplicate
(4 rows)
Now, lets fix this....

Step 1. Create a new table from the affected table by pulling only two column values, CTID and the primary key.
postgres=# CREATE TABLE dupfinder AS SELECT ctid AS tid, id FROM duplicate_test;
SELECT 4
Step 2. Now, let's run the duplicate finder query with CTID to get the exact duplicates.
postgres=# select * from dupfinder x where exists (select 1 from dupfinder y where x.id = y.id and x.tid != y.tid);
  tid  | id
-------+----
 (0,1) | 10
 (0,5) | 10
(2 rows)
Step 3. Using the above result, you can now delete one of the rows from the main (affected) table by CTID.
postgres=# delete from duplicate_test where ctid='(0,5)' and id=10;
DELETE 1
Step 4. Now, your REINDEX or CREATE UNIQUE INDEX will be successful.
postgres=# create unique index idup on duplicate_test(id);
CREATE INDEX

postgres=# select * from duplicate_test ;
 id |  name
----+---------
 10 | Raghav
 20 | John H
 30 | Micheal
(3 rows)
Step 5. Don't forget to do an immediate VACUUM ANALYZE on the table to update the system catalogs as well as account for the CTID movement, for example as below.
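A minimal example on the table used above:

postgres=# vacuum analyze duplicate_test;
VACUUM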

Please do share your comments.

--Raghav

Saturday, February 11, 2012

Londiste Replication with PostgreSQL 9.0

Londiste is an asynchronous master/slave replication tool from Skytools. It is very simple and user-friendly, built like Slony. The core logic behind both Londiste and Slony is remote triggering, whereas Londiste follows an event-queuing model, which is not there in Slony-I.

Overview on Skytools:
Skytools is a Python-based application; it comes as a bundle of three things, PgQ, Londiste & walmgr, and it also requires the Python-Postgres driver 'psycopg2'.

  • PgQ: a queuing mechanism built with PL/pgSQL, with a Python framework on top of it.
  • Londiste: a replication tool written in Python, using PgQ as the event transporter.
  • walmgr: creates a WAL archiving setup.

I am not going to describe much here regarding the Londiste replication daemon process etc., because you can find the best tutorial on Skytools (PgQ/Londiste/walmgr) at this link: http://skytools.projects.postgresql.org/doc/.

Basically, my demo covers how to proceed with Londiste replication on PostgreSQL 9.0, along with the installation steps. I'd say the Skytools documentation and the PostgreSQL wiki (http://wiki.postgresql.org/wiki/Londiste_Tutorial) are more than enough to play around with Londiste replication.

Pre-Requisites with Download Links :


My Demo includes following :-
OS                     : RHEL 6 32 bit
DB version             : PostgreSQL 9.0
Two Clusters & Database: londiste_provider on 5432,Londiste_subscriber on 5433
Table                  : One Table (ltest)
Location of .ini file  : /opt/skytools-2.1.12/scripts
Location of Skytools   : /opt/skytools-2.1.12
Location of PG 9.0     : /opt/PostgreSQL/9.0/
As it's a simple demo with one table, I have tried it with RHEL 6 32-bit / PostgreSQL 9.0 with two clusters on my local box. You will need to tweak it as per your actual requirements.

Note: Before moving forward with the setup, I would like to remind you that all source installations must be done as the root user, and after installation those directories should be owned by the postgres user.

Step 1.
Install PostgreSQL 9.0 and create two clusters with the initdb command, making sure they run on 5432 & 5433 respectively (see the sketch below). (Remember, it's an old fact that initdb does not create the pg_log directory under the data directory; you need to create it explicitly.)
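A rough sketch of creating the two clusters, following the demo layout above (the second cluster's port can either be changed in its postgresql.conf or passed at startup, as shown in Step 5):

$ /opt/PostgreSQL/9.0/bin/initdb -D /opt/PostgreSQL/9.0/data
$ /opt/PostgreSQL/9.0/bin/initdb -D /opt/PostgreSQL/9.0/data_1
$ mkdir /opt/PostgreSQL/9.0/data/pg_log /opt/PostgreSQL/9.0/data_1/pg_log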

Step 2.
Install Skytools by downloading it from the above link. It's best practice to keep all sources in one common standard location; I used '/usr/local/src', with Skytools installed under '/opt/'. Now configure Skytools with the PostgreSQL 9.0 'pg_config'.
# tar -xvf skytools-2.1.12.tar.gz
# cd /usr/local/src/skytools-2.1.12
# ./configure --prefix=/opt/skytools-2.1.12 --with-pgconfig=/opt/PostgreSQL/9.0/bin/pg_config
# make 
# make install
Note: After the installation you will see two important contrib modules (pgq & londiste) under the PostgreSQL contrib location. Basically, these two contribs give you the functionality of Londiste replication.
# cd /opt/PostgreSQL/9.0/share/postgresql/contrib
# ll lond*
-rw-r--r--. 1 root root 29771 Jan 11 13:24 londiste.sql
-rw-r--r--. 1 root root 27511 Jan 11 13:24 londiste.upgrade.sql

# ll pgq*
-rw-r--r--. 1 root root  4613 Jan 11 13:24 pgq_ext.sql
-rw-r--r--. 1 root root  1170 Jan 11 13:24 pgq_lowlevel.sql
-rw-r--r--. 1 root root 69798 Jan 11 13:24 pgq.sql
-rw-r--r--. 1 root root  3940 Jan 11 13:24 pgq_triggers.sql
-rw-r--r--. 1 root root 54182 Jan 11 13:24 pgq.upgrade.sql
Step 3.
Install psycopg2; it's the Python-Postgres driver which is necessary for Skytools. Sometimes this driver doesn't come with Python, so here are the installation steps.
# tar -xvf psycopg2-2.4.2.tar.gz
# cd psycopg2-2.4.2
# python setup.py install --prefix=/usr/local
# python setup.py build_ext --pg-config /opt/PostgreSQL/9.0/bin/pg_config
Step 4.
Give ownership of the Skytools and PostgreSQL installation locations to the postgres user. This makes sure that all files/executables have postgres user permissions.
# chown -R postgres:postgres /opt/skytools-2.1.12 
# chown -R postgres:postgres /opt/PostgreSQL/9.0/
Step 5.
Set LD_LIBRARY_PATH & PYTHONPATH and start the two newly created clusters. You can place them in the .bash_profile of the postgres user as a permanent solution.
$export PYTHONPATH=/opt/skytools-2.1.12/lib/python2.6/site-packages/
$export LD_LIBRARY_PATH=/opt/PostgreSQL/9.0/lib:/usr/lib:/usr/lib/perl5/5.10.0/i386-linux-thread-multi/CORE:
or 
$ vi .bash_profile 
export PYTHONPATH=/opt/skytools-2.1.12/lib/python2.6/site-packages/
export LD_LIBRARY_PATH=/opt/PostgreSQL/9.0/lib:/usr/lib:/usr/lib/perl5/5.10.0/i386-linux-thread-multi/CORE:
:wq
$ . .bash_profile (execute to take effect of new settings)

Now Start the two cluster

$ pg_ctl -o "-p 5432" -D /opt/PostgreSQL/9.0/data start
$ pg_ctl -o "-p 5433" -D /opt/PostgreSQL/9.0/data_1 start
Step 6.
Create two databases, londiste_provider on 5432 and londiste_subscriber on 5433. Create one table with a primary key named 'ltest' in both databases and INSERT some data into the londiste_provider (ltest) table; after completion of the replication setup you should see that INSERTed data on the londiste_subscriber side.

You may not need CREATE TABLE on the slave side; instead you can use a structure dump/restore using pg_dump/pg_restore if you have many tables.
On 5432
psql -p 5432 -c "create database londiste_provider;"
psql -p 5432 londiste_provider
londiste_provider=# create table ltest(id int primary key);
londiste_provider=# insert into ltest VALUES (generate_series(1,10));
INSERT 0 10

On 5433
psql -p 5433 -c "create database londiste_subscriber;"
psql -p 5433 londiste_subscriber
londiste_subscriber=# create table ltest(id int primary key);
Step 7.
Create two .ini files, one for Londiste (londiste.ini) and another for the PgQ ticker (pgqadm.ini). You can also find sample .ini files in the base installation of Skytools, e.g. under "/opt/skytools-2.1.12/share/doc/skytools/conf". A minimal sketch of the two files is shown below.
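The sketch below is wired to the databases, queue and consumer names used in this demo, with the log/PID paths pointing to the directories created in Step 8; verify each key against the sample files shipped with your Skytools version.

$ vi /opt/skytools-2.1.12/scripts/londiste.ini
[londiste]
job_name = myfirstlondiste
provider_db = dbname=londiste_provider host=127.0.0.1 port=5432 user=postgres
subscriber_db = dbname=londiste_subscriber host=127.0.0.1 port=5433 user=postgres
pgq_queue_name = londiste.replica
logfile = /opt/PostgreSQL/9.0/log/%(job_name)s.log
pidfile = /opt/PostgreSQL/9.0/pid/%(job_name)s.pid

$ vi /opt/skytools-2.1.12/scripts/pgqadm.ini
[pgqadm]
job_name = pgqadm_londiste_provider
db = dbname=londiste_provider host=127.0.0.1 port=5432 user=postgres
maint_delay_min = 600
loop_delay = 0.1
logfile = /opt/PostgreSQL/9.0/log/%(job_name)s.log
pidfile = /opt/PostgreSQL/9.0/pid/%(job_name)s.pid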

Step 8.
Create two directories for the log and PID files and point to them in the parameters of londiste.ini and pgqadm.ini.
$ cd /opt/PostgreSQL/9.0
$ mkdir log pid
Step 9.
Start the replication with the .ini files: first install Londiste on the provider and subscriber, and then start the ticker (PgQ) to replicate the tables.

Install Londiste on the provider and subscriber with the below commands, one by one:
$ cd /opt/skytools-2.1.12/bin
$ ./londiste.py ../scripts/londiste.ini provider install
2012-01-12 14:56:03,667 11073 INFO plpgsql is installed
2012-01-12 14:56:03,674 11073 INFO txid_current_snapshot is installed
2012-01-12 14:56:03,675 11073 INFO Installing pgq
2012-01-12 14:56:03,676 11073 INFO   Reading from /opt/skytools-2.1.12/share/skytools/pgq.sql
2012-01-12 14:56:03,816 11073 INFO Installing londiste
2012-01-12 14:56:03,816 11073 INFO   Reading from /opt/skytools-2.1.12/share/skytools/londiste.sql

-bash-4.1$ ./londiste.py ../scripts/londiste.ini subscriber install
2012-01-12 14:56:17,871 11081 INFO plpgsql is installed
2012-01-12 14:56:17,872 11081 INFO Installing londiste
2012-01-12 14:56:17,873 11081 INFO   Reading from /opt/skytools-2.1.12/share/skytools/londiste.sql

-->Now, Install PqQ and start ticker with .ini file. 

-bash-4.1$ ./pgqadm.py ../scripts/pgqadm.ini install
2012-01-11 16:45:03,219 6348 INFO plpgsql is installed
2012-01-11 16:45:03,225 6348 INFO txid_current_snapshot is installed
2012-01-11 16:45:03,228 6348 INFO pgq is installed 

-bash-4.1$ ./pgqadm.py -d ../scripts/pgqadm.ini ticker -d

-->Add the table to provider & subscriber to replicate.

-bash-4.1$ ./londiste.py ../scripts/londiste.ini provider add ltest
2012-01-12 15:03:39,583 11139 INFO Adding public.ltest

-bash-4.1$ ./londiste.py ../scripts/londiste.ini subscriber add ltest
2012-01-12 15:03:47,367 11146 INFO Checking public.ltest
2012-01-12 15:03:47,384 11146 INFO Adding public.ltest
After adding, start the replication of the table.
-bash-4.1$ ./londiste.py ../scripts/londiste.ini replay -d

Note: "-d" option is to run the londiste/PgQ daemons in background. 
This completes the replication setup. Now you should see the "ltest" table data on the slave side (i.e. on port 5433).

Step 10.
Now let's understand what all happened in the background with the tables/logs/PIDs/data etc. Let's see them one by one.
Logs Information:
Table Structure after replication:

Event Queue Status
Replication status can be checked with the pgqadm.py utility as below:
-bash-4.1$ ./pgqadm.py ../scripts/pgqadm.ini status
Postgres version: 9.0.1   PgQ version: 2.1.8

Event queue                                    Rotation        Ticker   TLag
------------------------------------------------------------------------------
londiste.replica                                3/7200s    500/3s/60s     6s
------------------------------------------------------------------------------

Consumer                                                       Lag  LastSeen
------------------------------------------------------------------------------
londiste.replica:
  myfirstlondiste                                               6s        6s
------------------------------------------------------------------------------
Note: There are very good options in the Londiste & PgQ utilities for doing R & D.
Hoping you all have a successful Londiste replication setup. Please do post your comments; they are highly appreciated. See you all soon with some more postings.

--Raghav

Thursday, January 26, 2012

Resize a VARCHAR Column of a Large Table


Note: It is recommended not to tamper with the pg_catalogs.

On a forum, I saw an interesting posting and also the solution; however, a few things in that solution made me test it. The scenario is: "How to resize a VARCHAR column on a large table in less time, and what are the best approaches?". The known standard way is to create a NEW column with the desired size, copy the OLD data to the newly created column, drop the OLD column and finally rename the NEW column to the OLD column name. Note that I am talking here about 100 million rows :)

Another approach is to modify the PostgreSQL pg_catalogs with the new SIZE in the pg_attribute relation. Below are the steps.
  1. Drop any indexes on the column being resized
  2. Put the database into READ-ONLY mode (PG 9.x)
  3. Use an UPDATE command on the pg_attribute relation against the columns atttypmod (column size) and attname (column name)
Command:
update pg_attribute set atttypmod = atttypmod + (desired Resize) where attrelid=<relation OID> and attname='<column Name>';

The above command updates the pg_attribute relation with the new column SIZE and allows you to insert data according to the new size. The table data itself is not rewritten with the new SIZE; instead it is simply governed by the pg_catalog change. A concrete sketch follows below.
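As a hypothetical worked example (the table and column names are made up for illustration): to grow a column "name" from varchar(20) to varchar(40) on a table "t", remember that for varchar the atttypmod value stores the declared length plus a 4-byte header, so the target atttypmod is 40 + 4:

-- current definition: name varchar(20)  =>  atttypmod = 20 + 4 = 24
update pg_attribute set atttypmod = 40 + 4 where attrelid = 't'::regclass and attname = 'name';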

Disadvantage:

You cannot decrease the size; if you do, the VARCHAR column size becomes ZERO and won't allow you to enter any data into the table. You get the error below:

ERROR:  value too long for type character(0)

I will be back with more stuff. All the best :).

-Raghav

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License