DataStage jobs are hanging on a new server (after upgrade)
If you are facing the issue below:

After a DataStage upgrade or a fresh installation, jobs hang during testing. Hundreds of jobs are submitted together but never actually start executing, and the jobs that do run take far longer than expected.

This can happen because the nproc (max user processes) limit is too low for the DataStage user to spawn all the processes the jobs need.
[root@localhost ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127358
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
A decent value for max user processes is at least 10,000. Check the above values for the dsadm account, or for whichever account runs your DataStage engine.
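A quick way to check just the two relevant limits for the DataStage user (assuming dsadm is that account and its login shell is bash):

su - dsadm -c "ulimit -u -n"

This prints the max user processes and open files limits as seen from dsadm's own session, which is what matters here; the limits shown for root can differ from those of the DataStage user.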
If the open files and max user processes values are lower than recommended, ask your Linux team to raise them. The nproc value is configured in /etc/security/limits.conf.
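For example, entries like the following raise both limits for dsadm (the numbers are illustrative; agree the actual values with your Linux team):

# /etc/security/limits.conf - illustrative entries for the DataStage user
dsadm    soft    nproc     16384
dsadm    hard    nproc     16384
dsadm    soft    nofile    65536
dsadm    hard    nofile    65536

A fresh login session is needed for the new limits to take effect.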
If you are upgrading to a new server, you can also verify these settings on your current server for comparison.
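One simple way to compare the two machines (hostnames here are placeholders): capture the ulimit output on each as the DataStage user and diff the results:

ssh dsadm@old-server 'ulimit -a' > old-limits.txt
ssh dsadm@new-server 'ulimit -a' > new-limits.txt
diff old-limits.txt new-limits.txt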