Sunday, 20 March 2016

DataStage Job Compilation error "Failed to invoke GenRuntime using phantom process helper".

 
           



If you get the error below while compiling jobs in DataStage, and the error appears across all projects, jobs cannot be compiled or executed.

Failed to invoke GenRuntime using phantom process helper. 

Cause: One possible cause of this error is that the tmp directory is full. Go to /opt/IBM/InformationServer and check the space used by the tmp folder.

Run the command below to see the space used:

df -k /opt/IBM/InformationServer/tmp

This command shows the total space allocated to the file system, the space used, the space available, and the %Use.

If your issue is due to space in the tmp directory, you will see %Use at 100%. That means there is no space left in the tmp directory, and that is why your jobs are failing.

If you delete some unwanted files from the tmp folder, %Use may still not drop below 100%. Even after deleting the files, no space is released; this typically happens when a running process still holds the deleted files open, so the space stays allocated until that process exits.
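To see what is actually consuming the tmp directory, a quick sketch like the following can help. The path is the one from this article; adjust it if your install differs, and note that lsof may need to be installed separately on some systems:

```shell
# Directory to inspect (path from this article; change if yours differs)
TMPDIR=/opt/IBM/InformationServer/tmp

# List the largest files and subdirectories first
du -ak "$TMPDIR" 2>/dev/null | sort -rn | head -20

# Files that were deleted but are still held open by a process keep
# their space allocated until the process exits; lsof can reveal them.
lsof +L1 "$TMPDIR" 2>/dev/null
```

If lsof shows deleted files held open by DataStage processes, that explains why deleting files did not free any space.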

Solution:

A DataStage restart may resolve this issue: stopping the engine ends the processes that are holding the deleted temporary files open, which releases the space.
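The engine restart is usually done from the DSEngine directory. A rough sketch, assuming the default install path used in this article (your path may differ, and your site may also restart other Information Server services):

```
# Run as dsadm (or your DataStage admin account), with no jobs running
cd /opt/IBM/InformationServer/Server/DSEngine
. ./dsenv              # load the DataStage environment
bin/uv -admin -stop    # stop the DataStage engine
bin/uv -admin -start   # start it again
```

Make sure no jobs are running before stopping the engine.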

Your tmp folder may have been filled by a bad job: a job that hit a problem while running and filled the temp directory. The same thing can happen again if that job is run again, so the developer needs to fix the job to prevent a recurrence.

Friday, 18 March 2016

DataStage Jobs are hanging after Upgrade/Installation

           




If you are facing the issue below:

After a DataStage upgrade or a new installation, jobs hang during testing: hundreds of jobs are submitted together but never execute, or take a very long time to run. This can happen because the nproc limit (max user processes) is too low for the account running the jobs.

[root@localhost ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127358
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

A decent value for max user processes is at least 10,000. Check the values above for the dsadm account, or for whichever account runs your DataStage server.
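To check just the two limits that matter here, the following can be run as the engine account (dsadm is the conventional name; substitute your own, e.g. via `su - dsadm`):

```shell
# Run as the account that starts the DataStage engine
ulimit -u   # max user processes (nproc)
ulimit -n   # open files (nofile)
```

Each command prints either a number or "unlimited"; compare the nproc value against the 10,000 guideline above.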

If the open files and max user processes values are too low, ask your Linux team to increase them.

The nproc limit is configured in /etc/security/limits.conf.
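For reference, limits.conf entries use the format domain / type / item / value. The account name and numbers below are illustrative assumptions, not recommendations for every site:

```
# /etc/security/limits.conf -- domain  type  item  value
dsadm   soft   nproc    16384
dsadm   hard   nproc    16384
dsadm   soft   nofile   65535
dsadm   hard   nofile   65535
```

These limits are applied at login (via pam_limits), so the dsadm account needs to log in again, and the DataStage engine needs to be restarted under that new session, for the change to take effect.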

If you are upgrading to a new server, you can also verify these settings against your current server for comparison.