December 27, 2011

Alfresco (catalina.sh) not starting at boot on a Linux server.

Our company uses Alfresco, which was installed from SVN. We use the command catalina.sh start to start Alfresco. Our Alfresco server runs CentOS release 5.5.

We had to start Alfresco manually after every reboot, which was very tedious.

To fix this, we place the start command (/opt/apache-tomcat-6.0.20/bin/catalina.sh start) and the required environment variables in the system startup file (/etc/rc.d/rc.local).

In our case we followed these steps:

vi /root/alfrescostart.sh

Copy and paste the below lines:
####################################################################################

export JAVA_OPTS="-Xmx1024M -XX:MaxPermSize=512M -Dcom.sun.management.jmxremote -Dalfresco.home=."
export JAVA_HOME=/usr/java/jdk1.6.0_29
export MVN_HOME=/opt/apache-maven-2.2.1
export ANT_HOME=/opt/apache-ant-1.8.2
export CATALINA_HOME=/opt/apache-tomcat-6.0.20
export TOMCAT_HOME=$CATALINA_HOME
export APP_TOMCAT_HOME=$CATALINA_HOME
export PATH=$JAVA_HOME/bin:$CATALINA_HOME/bin:$MVN_HOME/bin:$ANT_HOME/bin:$PATH
#Command to Start the catalina at startup
/opt/apache-tomcat-6.0.20/bin/catalina.sh start

####################################################################################

chmod +x /root/alfrescostart.sh

Add the line "/root/alfrescostart.sh" to /etc/rc.d/rc.local
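The same change can be scripted; a minimal sketch, demonstrated here on a scratch copy so nothing system-critical is touched (on the real server the target is /etc/rc.d/rc.local):

```shell
# Demonstrated on a scratch copy; point this at /etc/rc.d/rc.local on the real box.
printf '#!/bin/sh\n' > /tmp/rc.local.sample
echo "/root/alfrescostart.sh" >> /tmp/rc.local.sample
chmod +x /tmp/rc.local.sample   # rc.local must be executable to run at boot
tail -n1 /tmp/rc.local.sample   # -> /root/alfrescostart.sh
```

Note the chmod: on some systems rc.local itself is not executable by default, in which case nothing in it runs at boot.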


That's all!

December 15, 2011

Drupal Integration with CAS for SSO

Step 1:

Requirements
============
PHP 5 with the following modules:
curl, openssl, dom, zlib, and xml

Download phpCAS from https://wiki.jasig.org/display/CASC/phpCAS
phpCAS version 1.0.0 or later.

There are several locations you can install the phpCAS library.

1. Module directory installation. This means installing the library folder
under the modules directory, so that the file
sites//modules/cas/CAS/CAS.php exists.

2. System wide installation. See the phpCAS installation guide, currently at
https://wiki.jasig.org/display/CASC/phpCAS+installation+guide

3. Libraries API installation. Install and enable the Libraries API module,
available at http://drupal.org/project/libraries. Then extract phpCAS so
that sites//libraries/CAS/CAS.php exists. For example:
$ cd sites/all/libraries
$ curl http://downloads.jasig.org/cas-clients/php/current.tgz | tar xz
$ mv CAS-* CAS

Step 2:

Download respective CAS Module from the link: http://drupal.org/project/cas

Place the cas folder in your Drupal modules directory.

Step 3:

Configuring CAS

Navigate to the CAS module configuration page at
Admin >> Users >> CAS Settings (D6)
Admin >> Configuration >> People >> CAS settings (D7)
Library (phpCAS)

If phpCAS has been successfully installed, the version number of phpCAS will be displayed. Otherwise information is provided to help you install and configure phpCAS.
CAS Server

Enter in the CAS server location. For example, if the CAS server is at https://www.example.com/cas, enter
Hostname or IP Address: www.example.com
Port: 443
URI: /cas

For extra security, you may also provide the PEM Certificate of the Certificate Authority which issued the certificate of the CAS server.
Login Form

These settings control how users may log into CAS using the user login form, either as displayed in a block or at user/login. Many installations will choose "Add link to login forms" or "Make CAS login default on login forms."

Additionally, the phrases used on the login forms may be customized for your particular brand.
User Accounts

Each CAS user must have a Drupal account before they can log in. By default, the administrator must create the account and then assign the CAS username to the account.

Selecting "Automatically create Drupal accounts" allows the administrator to skip pre-creating Drupal accounts and instead have Drupal accounts automatically be created when a CAS user first logs in.

By default, the Drupal account will be created with a bare minimum of information:
Name: CAS username
E-mail: empty
Roles: authenticated user
Password: A random string which is not displayed to the user

The e-mail address field may be populated if the e-mail addresses follow a predictable pattern based upon the CAS username — for example username@example.com.

Additional roles may also be assigned to all CAS users. These roles will be reassigned every time a CAS user logs in. Deselecting an option will not take away that role from any existing user.

The "Users cannot change email address" and "Users cannot change password" options control the user edit form when a user has logged in with CAS.
Redirection

The "Check with the CAS server to see if the user is already logged in?" option implements the Gateway feature of the CAS protocol. When a user visits the site, they will be redirected to the CAS server with the parameter gateway=true. If the user is already authenticated with the CAS server, they will be automatically logged in. If not, they will be silently redirected back to the Drupal site without being prompted for their password. This check is performed only once for users with cookies enabled. Beware: there may be negative interactions between this feature and various caching configurations.

The "Require CAS login for" options prompt for CAS authentication for anonymous users when visiting the specified pages. Users already authenticated with Drupal, even if they did not log in with CAS, will not be redirected to the CAS login server.

For example, when configuring CAS with OpenScholar, one could add site/register to the list of pages to require CAS login for.
Login/Logout Destinations

You may configure a special page for users to be redirected to the first time they log in to the CAS site. For example, you may wish to write an introductory page which all users should be required to see once. Or as above you may wish for users to be redirected to site/register in an OpenScholar installation.

A logout destination may be provided if you want your users to be directed to a certain page when they log out of CAS. This is not the CAS server's URL, but rather a page on your site you would like the users to be directed to by the CAS server.

Users are redirected to the "Change password URL," if provided, when they visit user/password ("Request a new password").

Users are redirected to the "Registration URL," if provided, when they visit user/register ("Create a new account").



Note:

In Drupal 6.16, or any version below 6.22, CAS will not work properly. You will see an error like the following in the server error log:

PHP Fatal error: Call to undefined function user_login_destination() in ../modules/cas/cas.module

To fix this error:

Add a small custom module in ../sites/all/modules/
For example: ../sites/all/modules/sample_forms.module
(in Drupal 6 the module also needs a matching sample_forms.info file and must be enabled on the modules page)

Now paste the below lines into sample_forms.module (a .module file must start with the <?php tag):
#######################################################
<?php
function user_login_destination() {
  $destination = drupal_get_destination();
  return $destination == 'destination=user%2Flogin' ? 'destination=user' : $destination;
}
#######################################################

November 23, 2011

MySQL Replication cluster

This guide is designed to help with the initial setup of a MySQL cluster in which multiple MySQL servers all serve the same content through the replication function. We have successfully deployed this solution for multiple clients and it is a very good option for those needing a more powerful MySQL setup.

Be sure your MySQL servers are running the same version before starting this guide. That said, some master-slave version combinations are possible; for more information, check:

http://dev.mysql.com/doc/refman/4.1/en/replication-compatibility.html


1 - Write down your planned setup: which server is the master and which server(s) will be slaves.

2 - Select your username/password for the replication accounts. You can have one per server if you want, or one for the whole MySQL network.

3 - mysql> GRANT REPLICATION SLAVE ON *.*

TO 'USERNAME'@'IPFROMTHESLAVE' IDENTIFIED BY 'PASSWORD';

USERNAME: the MySQL username
IPFROMTHESLAVE: the IP of the MySQL server that will replicate the master DB.
PASSWORD: the password for the replicator account.

Just a few side notes.

a) None of the passwords needs to be a root password.
b) It is not recommended to use only one replication user for the whole network.

4) On the master server you need to flush all the tables with a read lock; this prevents clients from writing to the databases, so they stay unchanged while we copy them over.

mysql> FLUSH TABLES WITH READ LOCK;

5) Make sure that the [mysqld] section of the my.cnf file on the master host includes a log-bin option. The section should also have a server-id=master_id option, where master_id must be a positive integer value from 1 to 2^32 - 1. For example:

[mysqld]

log-bin=mysql-bin

server-id=1

6) Log in to the master server with another SSH session and let's create a snapshot.

mkdir /home/slave_db
rsync -vrplogDtH /var/lib/mysql /home/slave_db

You may not want to replicate the mysql database if the slave server has a different set of user accounts from those that exist on the master. In this case, you should exclude it from the archive. When the rsync has finished, just log in to mysql and type:

SHOW MASTER STATUS;

Save this info in a text file inside the slave_db folder; we will use it later. After you finish, you can re-enable activity on the master: UNLOCK TABLES;

7) Stop the server that is to be used as a slave server and add the following to its my.cnf file:

[mysqld]

server-id=slave_id

The slave_id value, like the master_id value, must be a positive integer value from 1 to 2^32 - 1. In addition, it is very important that the ID of the slave be different from the ID of the master. For example:

[mysqld]

server-id=2

Remember that server-id must be unique in all the mysql network.

8) Copy the files over from the slave_db folder to the remote location. You can do this with the following command:

rsync -e ssh -avz /home/slave_db/ root@REMOTESERVER:/var/lib/mysql

Check that all the permissions are correct in the /var/lib/mysql folder. Remember the files must be owned by mysql:mysql.
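If ownership was lost during the copy, it can be restored in one step; a sketch assuming the default data directory and an existing mysql user/group on the slave:

```shell
# Restore ownership recursively after the rsync copy, then verify.
chown -R mysql:mysql /var/lib/mysql
ls -ld /var/lib/mysql   # owner and group should now read mysql mysql
```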



9) Start MySQL, log in to it, and run the following, changing the values as needed:

mysql> CHANGE MASTER TO

-> MASTER_HOST='master_host_name',

-> MASTER_USER='replication_user_name',

-> MASTER_PASSWORD='replication_password',

-> MASTER_LOG_FILE='recorded_log_file_name',

-> MASTER_LOG_POS=recorded_log_position;



10) type: START SLAVE;
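Once the slave is started, it's worth confirming that both replication threads came up; a quick check from the shell, assuming the mysql client is on the slave's PATH:

```shell
# Both lines should report "Yes"; if not, look at Last_Error in the full output.
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'
```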

Mirror Your Web Site With rsync

This tutorial shows how you can mirror your web site from your main web server to a backup server that can take over if the main server fails. We use the tool rsync for this, and we make it run through a cron job that checks every x minutes if there is something to update on the mirror. Thus your backup server should usually be up to date if it has to take over.

rsync updates only files that have changed, so you do not need to transfer 5 GB of data whenever you run rsync. It only mirrors new/changed files, and it can also delete files from the mirror that have been deleted on the main server. In addition to that it can preserve permissions and ownerships of mirrored files and directories; to preserve the ownerships, we need to run rsync as root which is what we do here. If permissions and/or ownerships change on the main server, rsync will also change them on the backup server.
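You can see this delta behaviour locally before involving two servers; a toy sketch with two scratch directories (all paths are examples):

```shell
# Build a small source tree and mirror it.
rm -rf /tmp/mirror-demo
mkdir -p /tmp/mirror-demo/src /tmp/mirror-demo/dst
echo "v1" > /tmp/mirror-demo/src/index.html
rsync -a /tmp/mirror-demo/src/ /tmp/mirror-demo/dst/

# Simulate a deletion and a new file on the "main server", then mirror again.
rm /tmp/mirror-demo/src/index.html
echo "v2" > /tmp/mirror-demo/src/new.html
rsync -a --delete /tmp/mirror-demo/src/ /tmp/mirror-demo/dst/

ls /tmp/mirror-demo/dst   # new.html only; index.html was deleted on the mirror too
```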

In this tutorial we will tunnel rsync through SSH which is more secure; it also means you do not have to open another port in your firewall for rsync - it is enough if port 22 (SSH) is open. The problem is that SSH requires a password for logging in which is not good if you want to run rsync as a cron job. The need for a password requires human interaction which is not what we want.

But fortunately there is a solution: the use of public keys. We create a pair of keys (on our backup server mirror.example.com), one of which is saved in a file on the remote system (server1.example.com). Afterwards we will not be prompted for a password anymore when we run rsync. This also includes cron jobs which is exactly what we want.

As you might have guessed already from what I have written so far, the concept is that we initiate the mirroring of server1.example.com directly from mirror.example.com; server1.example.com does not have to do anything to get mirrored.

I will use the following setup here:

* Main server: server1.example.com (server1) - IP address: 192.168.0.100
* Mirror/backup server: mirror.example.com (mirror) - IP address: 192.168.0.175
* The web site that is to be mirrored is in /var/www on server1.example.com.

rsync is for mirroring files and directories only; if you want to mirror your MySQL database, please take a look at these tutorials:

* How To Set Up Database Replication In MySQL
* How To Set Up A Load-Balanced MySQL Cluster

I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!

1 Install rsync

First we have to install rsync on both server1.example.com and mirror.example.com. For Debian systems, this looks like this:

server1/mirror:

(We do this as root!)

apt-get install rsync

On other Linux distributions you would use yum (Fedora/CentOS) or yast (SuSE) to install rsync.

2 Create An Unprivileged User On server1.example.com

Now we create an unprivileged user called someuser on server1.example.com that will be used by rsync on mirror.example.com to mirror the directory /var/www (of course, someuser must have read permissions on /var/www on server1.example.com).

server1:

(We do this as root!)

useradd -d /home/someuser -m -s /bin/bash someuser

This will create the user someuser with the home directory /home/someuser and the login shell /bin/bash (it is important that someuser has a valid login shell - something like /bin/false does not work!). Now give someuser a password:

passwd someuser

3 Test rsync

Next we test rsync on mirror.example.com. As root we do this:

mirror:

rsync -avz -e ssh someuser@server1.example.com:/var/www/ /var/www/

You should see something like this. Answer with yes:

The authenticity of host 'server1.example.com (192.168.0.100)' can't be established.
RSA key fingerprint is 32:e5:79:8e:5f:5a:25:a9:f1:0d:ef:be:5b:a6:a6:23.
Are you sure you want to continue connecting (yes/no)?

<-- yes

Then enter someuser's password, and you should see that server1.example.com's /var/www directory is mirrored to /var/www on mirror.example.com. You can check that like this on both servers:

server1/mirror:

ls -la /var/www

You should see that all files and directories have been mirrored to mirror.example.com, and the files and directories should have the same permissions/ownerships as on server1.example.com.

4 Create The Keys On mirror.example.com

Now we create the private/public key pair on mirror.example.com:

mirror:

(We do this as root!)

mkdir /root/rsync
ssh-keygen -t dsa -b 2048 -f /root/rsync/mirror-rsync-key

You will see something like this:

Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase): [press enter here]
Enter same passphrase again: [press enter here]
Your identification has been saved in /root/rsync/mirror-rsync-key.
Your public key has been saved in /root/rsync/mirror-rsync-key.pub.
The key fingerprint is:
68:95:35:44:91:f1:45:a4:af:3f:69:2a:ea:c5:4e:d7 root@mirror

It is important that you do not enter a passphrase, otherwise the mirroring will not work without human interaction, so simply hit enter!

Next, we copy our public key to server1.example.com:

mirror:

(Still, we do this as root.)

scp /root/rsync/mirror-rsync-key.pub someuser@server1.example.com:/home/someuser/

The public key mirror-rsync-key.pub should now be available in /home/someuser on server1.example.com.

5 Configure server1.example.com

Now log in through SSH on server1.example.com as someuser (not root!) and do this:

server1:

(Please do this as someuser!)

mkdir ~/.ssh
chmod 700 ~/.ssh
mv ~/mirror-rsync-key.pub ~/.ssh/
cd ~/.ssh
touch authorized_keys
chmod 600 authorized_keys
cat mirror-rsync-key.pub >> authorized_keys

By doing this, we have appended the contents of mirror-rsync-key.pub to the file /home/someuser/.ssh/authorized_keys. /home/someuser/.ssh/authorized_keys should look similar to this:

server1:

(Still as someuser!)

vi /home/someuser/.ssh/authorized_keys

ssh-dss AAAAB3NzaC1kc3MAAA[...]lSUom root@mirror

Now we want to allow connections only from mirror.example.com, and the connecting user should be allowed to use only rsync, so we add

command="/home/someuser/rsync/checkrsync",from="mirror.example.com",no-port-forwarding,no-X11-forwarding,no-pty

right at the beginning of /home/someuser/.ssh/authorized_keys:

server1:

(Still as someuser!)

vi /home/someuser/.ssh/authorized_keys

command="/home/someuser/rsync/checkrsync",from="mirror.example.com",no-port-forwarding,no-X11-forwarding,no-pty ssh-dss AAAAB3NzaC1kc3MAAA[...]lSUom root@mirror

It is important that you use a FQDN like mirror.example.com instead of an IP address after from=, otherwise the automated mirroring will not work!

Now we create the script /home/someuser/rsync/checkrsync that rejects all commands except rsync.

server1:

(We still do this as someuser!)

mkdir ~/rsync
vi ~/rsync/checkrsync

#!/bin/sh

case "$SSH_ORIGINAL_COMMAND" in
    *\&*)
        echo "Rejected"
        ;;
    *\(*)
        echo "Rejected"
        ;;
    *\{*)
        echo "Rejected"
        ;;
    *\;*)
        echo "Rejected"
        ;;
    *\<*)
        echo "Rejected"
        ;;
    *\`*)
        echo "Rejected"
        ;;
    rsync\ --server*)
        $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "Rejected"
        ;;
esac

chmod 700 ~/rsync/checkrsync


6 Test rsync On mirror.example.com

Now we must test on mirror.example.com if we can mirror server1.example.com without being prompted for someuser's password. We do this:

mirror:

(We do this as root!)

rsync -avz --delete --exclude=**/stats --exclude=**/error --exclude=**/files/pictures -e "ssh -i /root/rsync/mirror-rsync-key" someuser@server1.example.com:/var/www/ /var/www/

(The --delete option means that files that have been deleted on server1.example.com should also be deleted on mirror.example.com. The --exclude option means that these files/directories should not be mirrored; e.g. --exclude=**/error means "do not mirror /var/www/error". You can use multiple --exclude options. I have listed these options as examples; you can adjust the command to your needs. Have a look at
man rsync

for more information.)

You should now see that the mirroring takes place:

receiving file list ... done

sent 71 bytes received 643 bytes 476.00 bytes/sec
total size is 64657 speedup is 90.56

without being prompted for a password! This is what we wanted.


7 Create A Cron Job

We want to automate the mirroring, that is why we create a cron job for it on mirror.example.com. Run crontab -e as root:

mirror:

(We do this as root!)

crontab -e

and create a cron job like this:

*/5 * * * * /usr/bin/rsync -azq --delete --exclude=**/stats --exclude=**/error --exclude=**/files/pictures -e "ssh -i /root/rsync/mirror-rsync-key" someuser@server1.example.com:/var/www/ /var/www/

This would run rsync every 5 minutes; adjust it to your needs (see

man 5 crontab

). I use the full path to rsync here (/usr/bin/rsync) just to be sure that cron knows where to find rsync. Your rsync location might differ. Run

mirror:

(We do this as root!)

which rsync

to find out where yours is.


8 Links

* rsync: http://samba.anu.edu.au/rsync

Logout root automatically when inactive

Generally, administrators stay logged in as "root" or forget to log out after finishing their work, leaving their terminals unattended.

The answer to this problem is to make the bash shell log out automatically after a period of inactivity. To do that, set the special variable named "TMOUT" to the number of seconds of no input before logout.

Edit your profile file (vi /etc/profile) and add the following line somewhere after the line that reads "HISTSIZE=" in this file:

Code:

TMOUT=7200


The value we enter for the variable "TMOUT=" is in seconds, and 7200 represents 2 hours (2 x 60 x 60 = 7200 seconds). It is important to note that if you put the above line in your /etc/profile file, the automatic logout after two hours of inactivity will apply to all users on the system. If you prefer to control which users are logged out automatically and which are not, set this variable in their individual .bashrc files instead.
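For the per-user variant, the same variable goes into that user's ~/.bashrc; a minimal fragment (the readonly line is an optional hardening step so the user cannot simply unset the timeout, and 1800 seconds is just an example value):

```shell
# In the user's ~/.bashrc: auto-logout after 30 minutes of inactivity
TMOUT=1800
readonly TMOUT
export TMOUT
```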

After this parameter has been set on your system, you must logout and login again (as root) for the change to take effect.

Linux Shell Script to reboot DSL or ADSL router.

If you need to reboot the router, you normally use the web interface or the telnet interface. Both methods take time, especially if you are playing with ACLs, NAT or the router firewall, or you just want to reboot the router from your Linux desktop. I have created a simple script using the expect tool to reboot the router. Make sure you have the expect command installed; use rpm or apt-get to install the expect tool.
Shell script

Create a script as follows (tested on Beetel ADSL 220x router):

#!/usr/bin/expect -f

set timeout 20

# router user name
set name "admin"

# router password
set pass "PASSWORD"

# router IP address
set routerip "192.168.1.254"

# Read command as arg to this script
set routercmd [lindex $argv 0]

# start telnet
spawn telnet $routerip

# send username & password
expect "Login:"
send -- "$name\r"
expect "Password:"
send -- "$pass\r"

# get out of ISP's Stupid menu program, go to shell
expect " -> "
send -- "sh\r"

# execute command
expect "# "
send -- "$routercmd\r"
# exit
send -- "^D"

Save the script and set execute permission on it:
$ chmod +x router.exp

How do I run this script?


You need to pass a command to the script to execute on the router. For example, to display router uptime or interface information, or to reboot the router, type commands as follows:
$ ./router.exp uptime
$ ./router.exp ifconfig
$ ./router.exp reboot

Since my ISP's router presents a menu as soon as you log in, the above script may not work on a generic router such as a Cisco or Linksys router; you may need to modify it to work with your device. If you are new to expect, use the autoexpect command to generate a script. It watches you interacting with another program and creates an Expect script that reproduces your interactions. For straight-line scripts, autoexpect saves substantial time over writing scripts by hand. Even if you are an Expect expert, you will find it convenient to use autoexpect to automate the more mindless parts of interactions, and it is much easier to cut/paste chunks of autoexpect scripts together than to write them from scratch. Moreover, if you are a beginner, you may be able to get away with learning nothing more about Expect than how to call autoexpect. Just type autoexpect:
$ autoexpect
Output:

autoexpect started, file is script.exp

Next type telnet command (telnet to the router):
$ telnet 192.168.1.254
Output:

Login: USER
Password: Password

Now type commands on the router:
$ ifconfig
$ exit
You are done; type exit to stop the autoexpect command:
$ exit
Output:

autoexpect done, file is script.exp

Just type ./script.exp to run ifconfig command:
$ ./script.exp

You can now modify script.exp to reboot or to run other commands. It is a real lifesaver.

How to Prevent DDoS Attack

All web servers connected to the Internet are subject to DoS (Denial of Service) or DDoS (Distributed Denial of Service) attacks of some kind, where attackers consistently and persistently open large numbers of connections to the server (in the advanced case, distributed across multiple source IP addresses) in the hope of bringing the server down, or of using up all network bandwidth and system resources so that web pages are no longer served to legitimate visitors.

You can detect a DDoS attack using the following command:

netstat -anp|grep tcp|awk '{print $5}'| cut -d : -f1|sort|uniq -c|sort -n

It shows the number of connections from each IP address to the server.
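To make clear what each stage of that pipeline does, here it is run over a few canned netstat-style lines instead of live output (the addresses are examples):

```shell
# Field 5 is the foreign address; strip the port, then count connections per IP.
printf '%s\n' \
  'tcp 0 0 10.0.0.1:80 203.0.113.10:4431 ESTABLISHED' \
  'tcp 0 0 10.0.0.1:80 203.0.113.10:4432 ESTABLISHED' \
  'tcp 0 0 10.0.0.1:80 198.51.100.7:5050 ESTABLISHED' \
  | awk '{print $5}' | cut -d : -f1 | sort | uniq -c | sort -n
```

This prints each source IP with its connection count, busiest last, so a flood from a single address stands out at the bottom of the list.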

There are plenty of ways to prevent, stop, fight and kill off a DDoS attack, such as using a firewall. A low-cost, and probably free, method is to use a software-based firewall or filtering script. (D)DoS-Deflate is a free open source Unix/Linux script by MediaLayer that automatically mitigates (D)DoS attacks. It claims to be the best free, open source solution to protect servers against some of the most excruciating DDoS attacks.

The (D)DoS-Deflate script uses the netstat command to monitor and track IP addresses that are establishing large numbers of TCP connections (mass emailing, DoS pings, HTTP floods and so on), which is the symptom of a denial of service attack. When it detects that the number of connections from a single node exceeds a certain preset limit, the script automatically uses APF or iptables to ban and block the IP. Depending on the configuration, the banned IP addresses are later unbanned using APF or iptables (unbanning only works on APF v0.96 or better).

Installation and setup of (D)DOS-Deflate on the server is extremely easy. Simply log in as root over SSH and run the following commands one by one:

wget http://www.inetbase.com/scripts/ddos/install.sh
chmod 0700 install.sh
./install.sh

To uninstall the (D)DOS-Deflate, run the following commands one by one instead:

wget http://www.inetbase.com/scripts/ddos/uninstall.ddos
chmod 0700 uninstall.ddos
./uninstall.ddos

The configuration file for (D)DOS-Deflate is ddos.conf, and by default it will have the following values:

Code:
FREQ=1
NO_OF_CONNECTIONS=50
APF_BAN=1
KILL=1
EMAIL_TO="root"
BAN_PERIOD=600

Users can change any of these settings to suit the needs or usage patterns of different servers. It's also possible to whitelist and permanently unblock (never ban) IP addresses by listing them in the /usr/local/ddos/ignore.ip.list file. If you plan to run the script interactively, set KILL=0 so that any bad IPs detected are not banned.

How to increase the memory limit of php

If you have seen an error like "Fatal Error: PHP Allowed Memory Size Exhausted" in the Apache logs or in your browser, it means that PHP has exhausted its maximum memory limit. This post shows 3 different ways to increase the PHP memory limit and explains when you should use each one.

First, let's see where this limit comes from. The error message normally tells you the actual limit, as it looks like:

"PHP Fatal error: Allowed memory size of X bytes exhausted (tried to allocate Y) in whatever.php"

The default value differs depending on the PHP version and Linux distribution you are running, but normally it is set to either 8M or 16M. For example, on my Debian Etch system running PHP 5.2 it defaults to 16M.

In order to identify the current value on your system, look inside your php.ini and search for memory_limit:
memory_limit = 16M ; Maximum amount of memory a script may consume (16MB)

There are three ways to change this value: the obvious way, changing the global value in php.ini, but also individual methods to change it just for a single script or folder.

1. Changing memory_limit globally from php.ini

This is the simplest and most obvious method. You just edit your php.ini and change memory_limit to whatever you need. For example:

memory_limit = 32M

You will need access to php.ini on the system to make this change. The change is global and will be used by all PHP scripts running on the system. Once you change this value, you will need to restart the web server for it to become active.
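The edit itself is easy to script with sed; sketched here against a small sample file, since the real php.ini path varies by distribution:

```shell
# Create a sample ini, rewrite the memory_limit line in place, show the result.
printf 'memory_limit = 16M\n' > /tmp/php.ini.sample
sed -i 's/^memory_limit = .*/memory_limit = 32M/' /tmp/php.ini.sample
cat /tmp/php.ini.sample   # -> memory_limit = 32M
```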

Keep in mind that this limit exists for a reason; don't increase it blindly, as poorly written PHP scripts might overwhelm your system without proper limits.
Note: if you know what you are doing and want to remove the memory limit, you would set this value to -1.

2. Changing memory_limit using .htaccess for a single folder/vhost


Changing the global memory_limit might not be a good idea; you might be better off changing it only inside one folder (normally one application or virtual host) that needs the higher value for its functionality. To do this, add something like the following to the .htaccess file in that location:

php_value memory_limit 64M

This change is local only, and can be useful for webmasters who don't have control over the system php.ini. It does not require a reload and becomes active immediately.

3. Changing memory_limit inside a single php script.

For even more control you can set this directive inside a single php script. To do so you would use in your code:

ini_set('memory_limit', '64M');

The advantage of this method is that you have more control and set this value just where you know it is really needed. Also it can be done without having access to the system php.ini, and will become active immediately.

Note: in order to be able to use these PHP resource limits, your PHP version must have been compiled with the --enable-memory-limit configure option. Most packaged versions will have this, but just in case it doesn't work as expected, check how PHP was compiled first.

Deny users and groups in Openssh

OpenSSH has two directives for allowing and denying ssh user access.

DenyUsers user1 user2 user3

Use it to block user logins. You can use wildcards, as well as the user1@somedomain.com pattern (user1 is not allowed to log in from the host somedomain.com).

DenyGroups group1 group2
A list of group names; if a user's primary or supplementary group is listed, login access is denied. You can use wildcards.

Please note that you cannot use a numeric group or user ID. If these directives are not used, the default is to allow everyone.

AllowUsers user1 user2
This directive is the opposite of the DenyUsers directive.

AllowGroups group1 group2
This directive is the opposite of the DenyGroups directive.

You should always block access to root user/group:
Open /etc/ssh/sshd_config file:

# vi /etc/ssh/sshd_config

Append following names (directives):

DenyUsers root finadmin
DenyGroups root finadmin

Make sure at least one user is allowed to use 'su -' command.

Save the file and restart the sshd.
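A typo in sshd_config can lock you out of the machine, so it's worth validating the file before the restart; a sketch (the restart command varies by distribution):

```shell
# sshd -t parses the config and exits non-zero on errors; restart only if clean.
sshd -t && service sshd restart
```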

This is a secure setup: with the four directives above you are restricting which users are allowed to access the system via SSH.

Automatic Login using expect tool and ssh

To save time, you can store the login information in a file and then use the expect tool to log in to a server.

Before you proceed, make sure the expect tool is installed; if not, install it:

# yum install expect

Now save the login information to a file. Take a look at the below example.

vi server1

#!/usr/bin/expect -f
spawn ssh root@192.168.0.254
expect "password:"
send "password\r"
expect "#"
interact

Save the file and exit. Make sure you replace the password in send "password\r" with the real password, leaving the \r alone; otherwise you will have to press Enter to log in to the server. Since the file contains a plaintext password, it is also wise to restrict its permissions (chmod 600 server1).

Now use the command to login to the server 192.168.0.254

expect server1

That's it: you are logged in, with no password prompt at all.

SSO of Alfresco with CAS

Notes for

Installation and configuration of Alfresco
Installation of CAS
Integration of Alfresco Explorer and Share with CAS
SSO


You can mail me on pcgeopc@gmail.com

November 22, 2011

Drush

"drush" is a command line shell and scripting interface for Drupal, a veritable Swiss Army knife designed to make life easier for those of us who spend some of our working hours hacking away at the command prompt. In general:
• drush is a command line shell and scripting interface for Drupal.
• drush is not a module
• It is valid to use the latest '7.x' (or master) branch no matter what your version of Drupal is; Drush releases are independent of the Drupal version

Installation:

1. Untar the tarball into a folder outside of your web site (/path/to/drush)
2. Make the 'drush' command executable:
$ chmod u+x /path/to/drush/drush
3. (Optional, but recommended:) To ease the use of drush,
- create a link to drush in a directory that is in your PATH, e.g.:
$ ln -s /path/to/drush/drush /usr/local/bin/drush
NOTE ON PHP.INI FILES
Usually, php is configured to use separate php.ini files for the web server and the command line. To see which php.ini file drush is using, run:
$ drush status
Compare the php.ini that drush is using with the php.ini that the web server is using. Make sure that drush's php.ini is given as much memory to work with as the web server's; otherwise, Drupal might run out of memory when drush bootstraps it.
Drush requires a fairly unrestricted php environment to run in. In particular, you should ensure that safe_mode, open_basedir, disable_functions and disable_classes are empty.
If drush is using the same php.ini file as the web server, you can create a php.ini file exclusively for drush by copying your web server's php.ini file to the folder $HOME/.drush or the folder /etc/drush. Then you may edit this file and change the settings described above without affecting the php environment of your web server.

4. Start using drush by running "drush" from your Drupal root directory.

Drush Commands:
You can find the drush commands from the url: http://drush.ws/
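
For a quick taste, here are a few commonly used Drush commands, run from the Drupal root (exact command names can differ between Drush releases, so treat these as illustrative):

```
drush status       # show site and environment information
drush dl views     # download the Views module
drush en views -y  # enable it without the confirmation prompt
drush cc all       # clear all caches
drush up           # update Drupal core and contributed modules
```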

September 12, 2011

Puppet script for installing Apache, MySQL & PHP (LAMP) on any Linux operating system.

Click here for a generalized Puppet script that installs Apache, MySQL and PHP on any Linux operating system.

For the GitHub URL, click here or use git@github.com:geopcgeo/LAMP.git

August 25, 2011

Optimizing MySQL

mysql_fix_privilege_tables
mysqlcheck -o --all-databases

Open /etc/my.cnf
[mysqld]
max_connections=500
safe-show-database
query_cache_limit=1M
query_cache_size=32M
query_cache_type=1
key_buffer_size=256M
table_cache=150
thread_cache_size=200
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock

[mysql.server]
user = mysql
basedir = /var/lib

[safe_mysqld]
err-log = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

Below are notes on some of the important variables, I took down while tuning the config file.

1. QUERY CACHE
query_cache_size:
* MySQL 4 provides one feature that can prove very handy: a query cache. In a situation where the database has
to repeatedly run the same queries on the same data set, returning the same results each time, MySQL can cache the result
set, avoiding the overhead of running through the data over and over; this is extremely helpful on busy servers.

query_cache_limit : the maximum size of an individual query result that will be cached (the total memory used for the cache is set by query_cache_size)

query_cache_type : 0 - Off
1 - Cache all query results except for those that begin with SELECT SQL_NO_CACHE
2 - Cache results only for queries that begin with SELECT SQL_CACHE.

2. key_buffer_size:
* The value of key_buffer_size is the size of the buffer used with indexes. The larger the buffer, the faster
the SQL command will finish and a result will be returned. Ideally, it will be large enough to contain
all the indexes (the total size of all .MYI files on the server).
Using a value that is 25% of total memory on a machine that mainly runs MySQL is quite common.

The Key_reads/Key_read_requests ratio < 0.01
The Key_writes/Key_write_requests ratio ~ 1

SHOW STATUS;
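
As a rough sketch, the two ratios above can be computed from the counters that SHOW STATUS reports (the counter values below are made-up sample numbers, not from a real server):

```python
# Sample counter values as SHOW STATUS might report them (made-up numbers).
status = {
    "Key_read_requests": 2_000_000,  # index reads satisfied from the key buffer
    "Key_reads": 12_000,             # index reads that had to hit disk
    "Key_write_requests": 150_000,
    "Key_writes": 140_000,
}

read_ratio = status["Key_reads"] / status["Key_read_requests"]
write_ratio = status["Key_writes"] / status["Key_write_requests"]

print(f"Key_reads/Key_read_requests   = {read_ratio:.4f}")   # want < 0.01
print(f"Key_writes/Key_write_requests = {write_ratio:.4f}")  # want close to 1

# A read ratio above 0.01 suggests key_buffer_size is too small.
```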

3. LOG
log : Whether logging of all statements to the general query log is enabled. See Section 5.3.2, “The General Query Log”.
log_error : The location of the error log. This variable was added in MySQL 4.0.10.
log_slow_queries : Whether slow queries should be logged. “Slow” is determined by the value of the long_query_time
variable.

4. table_cache:
* The default is 64. Each time MySQL accesses a table, it places it in the cache. If the system accesses many
tables, it is faster to have these in the cache. MySQL, being multi-threaded, may be running many queries on the table at
one time, and each of these will open a table. Examine the value of open_tables at peak times. If you find it stays at the
same value as your table_cache value, and then the number of opened_tables starts rapidly increasing, you should increase
the table_cache if you have enough memory.

check for Open_tables
SHOW STATUS;

5. sort_buffer_size: Each thread that needs to do a sort allocates a buffer of this size. Increase this value for faster
ORDER BY or GROUP BY operations.

6. read_rnd_buffer_size:
* The read_rnd_buffer_size is used after a sort, when reading rows in sorted order. If you use many queries with
ORDER BY, upping this can improve performance. Remember that, unlike key_buffer_size and table_cache, this buffer is
allocated for each thread. This variable was renamed from record_rnd_buffer in MySQL 4.0.3. It defaults to the same size as
the read_buffer_size. A rule-of-thumb is to allocate 1KB for each 1MB of memory on the server, for example 1MB on a machine
with 1GB memory.

7. thread_cache_size:
* If you have a busy server that’s getting a lot of quick connections, set your thread cache high enough that the
Threads_created value in SHOW STATUS stops increasing. This should take some of the load off of the CPU.

8. tmp_table_size:
* “Created_tmp_disk_tables” are the number of implicit temporary tables on disk created while executing statements
and “created_tmp_tables” are memory-based. Obviously it is bad if you have to go to disk instead of memory all the time.

Increase the value of tmp_table_size if you do many advanced GROUP BY queries and you have lots of memory.
This variable does not apply to user-created MEMORY tables.

9. innodb_buffer_pool_size
While the key_buffer_size is the variable to target for MyISAM tables, for InnoDB tables, it is innodb_buffer_pool_size.
Again, you want this as high as possible to minimize slow disk usage. On a dedicated MySQL server running InnoDB tables,
you can set this up to 80% of the total available memory.

10. innodb_additional_mem_pool_size
This variable stores the internal data structure. Make sure it is big enough to store data about all your InnoDB tables
(you will see warnings in the error log if the server is using OS memory instead).

11. max_connections

12. wait_timeout=500
This variable determines the timeout in seconds before MySQL will drop a connection. If set too low,
you will likely receive 'MySQL server has gone away' errors in your log, which is quite common in vBulletin's case.

13. max_allowed_packet
The maximum size of one packet or any generated/intermediate string.
Again, if set too low (the default is 8M) users will likely experience errors. 16M has always
worked fine for my production environments.

You can grab a MySQL performance script from the guys at hackmysql.com ( http://hackmysql.com/mysqlreport ). I use it to
tell me how the database is performing under load. You can run it from any shell while the server is loaded with traffic.
Nothing fancy, but it should give you an idea.

==================================
http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html
http://dev.mysql.com/doc/refman/4.1/en/server-status-variables.html

August 23, 2011

Installing Puppet

Installing Facter From Source

The facter library is a prerequisite for Puppet. Like Puppet, there are packages available for most platforms, though you may want to use the tarball if you would like to try a newer version or are using a platform without an OS package:

Get the latest tarball:

$ wget http://puppetlabs.com/downloads/facter/facter-1.6.0.tar.gz

Untar and install facter:

$ gzip -d -c facter-1.6.0.tar.gz | tar xf -
$ cd facter-*
$ sudo ruby install.rb # or become root and run install.rb



Installing Puppet From Source

Using the same mechanism as Facter, install the puppet libraries and executables:

# get the latest tarball
$ wget http://puppetlabs.com/downloads/puppet/puppet-latest.tgz
# untar and install it
$ gzip -d -c puppet-latest.tgz | tar xf -
$ cd puppet-*
$ sudo ruby install.rb # or become root and run install.rb

You can also check the source out from the git repo:

$ mkdir -p ~/git && cd ~/git
$ git clone git://github.com/puppetlabs/puppet
$ cd puppet
$ sudo ruby ./install.rb

To install into a different location you can use:

$ sudo ruby install.rb --bindir=/usr/bin --sbindir=/usr/sbin

Alternative Install Method: Using Ruby Gems

You can also install Facter and Puppet via gems:

$ wget http://puppetlabs.com/downloads/gems/facter-1.5.7.gem
$ sudo gem install facter-1.5.7.gem
$ wget http://puppetlabs.com/downloads/gems/puppet-0.25.1.gem
$ sudo gem install puppet-0.25.1.gem


August 22, 2011

Puppet

Why Puppet?

As system administrators acquire more and more systems to manage, automation of mundane tasks is increasingly important. Rather than develop in-house scripts, it is desirable to share a system that everyone can use, and invest in tools that can be used regardless of one’s employer. Certainly doing things manually doesn’t scale.

Puppet has been developed to help the sysadmin community move to building and sharing mature tools that avoid the duplication of everyone solving the same problem. It does so in two ways:

• It provides a powerful framework to simplify the majority of the technical tasks that sysadmins need to perform.
• The sysadmin work is written as code in Puppet's custom language, which is shareable just like any other code.
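
To give a flavour of that language, a minimal manifest that keeps the NTP service installed, configured, and running might look like the following (the package, service, and file names are illustrative and vary between distributions):

```
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  require => Package['ntp'],
  notify  => Service['ntpd'],
}

service { 'ntpd':
  ensure => running,
  enable => true,
}
```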



Below are a few links for studying Puppet:


http://bitfieldconsulting.com/puppet-tutorial
http://bitfieldconsulting.com/puppet-tutorial-2


http://projects.puppetlabs.com/projects/1/wiki/Using_Stored_Configuration

http://www.agileweboperations.com/configuration-management-introduction-to-puppet

http://bitfieldconsulting.com/puppet-and-mysql-create-databases-and-users

http://www.how2centos.com/installing-puppet-dashboard-on-centos-5-5/



http://docs.puppetlabs.com/

http://docs.puppetlabs.com/guides/installation.html
http://docs.puppetlabs.com/guides/configuring.html
http://docs.puppetlabs.com/guides/language_guide.html
http://docs.puppetlabs.com/guides/tools.html


August 9, 2011

Linux using my RAM

Q. How do I find out what processes are eating up all my memory? Is it possible to find out how long that memory has been allocated to a particular process? How do I kill a process to free up memory?

A. You need to use the top command which provides a dynamic real-time view of a running system. It can display system summary information as well as a list of tasks currently being managed by the Linux kernel.
Simply type top command:

# top

The top command will tell you the percentage of physical memory a particular process is using at any given time. As far as I know, there is no easy way to tell how long that memory has been allocated.
You can also use ps command to get more information about process.

# ps aux | less

To kill a process, use the kill command.

'free' and /proc
The 'free' command shows the memory on a machine, in certain categories.
The columns are: total physical memory, memory in use, free memory, memory shared between processes, kernel buffers (raw disk blocks), and the page cache. The '-/+ buffers/cache' row shows usage with buffers and cache counted as reclaimable, i.e. free.

$ free
total used free shared buffers cached
Mem: 507564 481560 26004 0 68888 185220
-/+ buffers/cache: 227452 280112
Swap: 2136604 105168 2031436
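
The '-/+ buffers/cache' row is derived from the first row: subtracting (or adding back) buffers and cache gives a better picture of "real" usage, since the cache can be reclaimed on demand. A small sketch using the numbers from the output above:

```python
# Values (in kB) from the `free` output above.
total, used, free_kb = 507564, 481560, 26004
buffers, cached = 68888, 185220

# Memory genuinely used by applications (buffers and cache excluded) ...
app_used = used - buffers - cached
# ... and memory effectively available (cache can be reclaimed on demand).
effectively_free = free_kb + buffers + cached

print(app_used)          # 227452, the "-/+ buffers/cache" used column
print(effectively_free)  # 280112, the "-/+ buffers/cache" free column
```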

This information is obtained from /proc/meminfo, which has additional details not shown by the 'free' command.
The following is on my machine with 512 Mb RAM, running Linux 2.6.3:

$ cat /proc/meminfo

MemTotal: 507564 kB
MemFree: 26004 kB
Buffers: 68888 kB
Cached: 185220 kB
SwapCached: 29348 kB
Active: 342488 kB
Inactive: 32092 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 507564 kB
LowFree: 26004 kB
SwapTotal: 2136604 kB
SwapFree: 2031436 kB
Dirty: 88 kB
Writeback: 0 kB
Mapped: 165648 kB
Slab: 73212 kB
Committed_AS: 343172 kB
PageTables: 2644 kB
VmallocTotal: 524212 kB
VmallocUsed: 7692 kB
VmallocChunk: 516328 kB


meminfo:

Provides information about distribution and utilization of memory. This
varies by architecture and compile options. The following is from a
16GB PIII, which has highmem enabled. You may not have all of these fields.

> cat /proc/meminfo

MemTotal: 16344972 kB
MemFree: 13634064 kB
Buffers: 3656 kB
Cached: 1195708 kB
SwapCached: 0 kB
Active: 891636 kB
Inactive: 1077224 kB
HighTotal: 15597528 kB
HighFree: 13629632 kB
LowTotal: 747444 kB
LowFree: 4432 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 968 kB
Writeback: 0 kB
Mapped: 280372 kB
Slab: 684068 kB
Committed_AS: 1576424 kB
PageTables: 24448 kB
ReverseMaps: 1080904
VmallocTotal: 112216 kB
VmallocUsed: 428 kB
VmallocChunk: 111088 kB

MemTotal: Total usable ram (i.e. physical ram minus a few reserved
bits and the kernel binary code)
MemFree: The sum of LowFree+HighFree
Buffers: Relatively temporary storage for raw disk blocks
shouldn't get tremendously large (20MB or so)
Cached: in-memory cache for files read from the disk (the
pagecache). Doesn't include SwapCached
SwapCached: Memory that once was swapped out, is swapped back in but
still also is in the swapfile (if memory is needed it
doesn't need to be swapped out AGAIN because it is already
in the swapfile. This saves I/O)
Active: Memory that has been used more recently and usually not
reclaimed unless absolutely necessary.
Inactive: Memory which has been less recently used. It is more
eligible to be reclaimed for other purposes
HighTotal:
HighFree: Highmem is all memory above ~860MB of physical memory
Highmem areas are for use by userspace programs, or
for the pagecache. The kernel must use tricks to access
this memory, making it slower to access than lowmem.
LowTotal:
LowFree: Lowmem is memory which can be used for everything that
highmem can be used for, but it is also available for the
kernel's use for its own data structures. Among many
other things, it is where everything from the Slab is
allocated. Bad things happen when you're out of lowmem.
SwapTotal: total amount of swap space available
SwapFree: Memory which has been evicted from RAM, and is temporarily
on the disk
Dirty: Memory which is waiting to get written back to the disk
Writeback: Memory which is actively being written back to the disk
Mapped: files which have been mmaped, such as libraries
Slab: in-kernel data structures cache
Committed_AS: An estimate of how much RAM you would need to make a
99.99% guarantee that there never is OOM (out of memory)
for this workload. Normally the kernel will overcommit
memory. That means, say you do a 1GB malloc, nothing
happens, really. Only when you start USING that malloc
memory you will get real memory on demand, and just as
much as you use. So you sort of take a mortgage and hope
the bank doesn't go bust. Other cases might include when
you mmap a file that's shared only when you write to it
and you get a private copy of that data. While it normally
is shared between processes. The Committed_AS is a
guesstimate of how much RAM/swap you would need
worst-case.
PageTables: amount of memory dedicated to the lowest level of page
tables.
ReverseMaps: number of reverse mappings performed
VmallocTotal: total size of vmalloc memory area
VmallocUsed: amount of vmalloc area which is used
VmallocChunk: largest contiguous block of vmalloc area which is free
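
The fields above can also be pulled out programmatically. As a minimal sketch, the helper below parses /proc/meminfo-style text into a dict of kB values (the sample string reuses numbers from the 16GB machine above):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style lines ("Name:  12345 kB") into {name: kB}."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        name, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            info[name.strip()] = int(fields[0])
    return info

sample = """\
MemTotal: 16344972 kB
MemFree: 13634064 kB
SwapTotal: 0 kB
"""
mem = parse_meminfo(sample)
print(mem["MemTotal"] - mem["MemFree"])  # kB of RAM currently in use
```

On a real system you would read the text with open("/proc/meminfo").read() instead of the sample string.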


This command will list all of your processes sorted by memory usage:

ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -nr | more

The first column shows the percentage of memory used by the process. You can use this information to find out which process is using the most.


ps -A --sort -rss -o comm,pmem | head -n 11

ps -A --sort -rss -o pid,comm,pmem,rss

The first command gives you the 10 processes using the most RAM; the second lists all processes with their PID, memory percentage, and resident size.

lsb_release -a && free -m

This command shows the Linux distribution version along with RAM and swap usage.




Use a tool called pmap. It reports the memory map of a process or processes.
pmap examples

To display process mappings, type
$ pmap pid
$ pmap 3724
Output:
3724: /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
0000000000400000 164K r-x-- /usr/sbin/lighttpd
0000000000629000 12K rw--- /usr/sbin/lighttpd
000000000bb6b000 4240K rw--- [ anon ]
00000035ee600000 104K r-x-- /lib64/ld-2.5.so
00000035ee819000 4K r---- /lib64/ld-2.5.so
00000035ee81a000 4K rw--- /lib64/ld-2.5.so
00000035eea00000 1304K r-x-- /lib64/libc-2.5.so
00000035eeb46000 2048K ----- /lib64/libc-2.5.so
00000035eed46000 16K r---- /lib64/libc-2.5.so
00000035eed4a000 4K rw--- /lib64/libc-2.5.so
00000035eed4b000 20K rw--- [ anon ]
00000035eee00000 8K r-x-- /lib64/libdl-2.5.so
00000035eee02000 2048K ----- /lib64/libdl-2.5.so
.....
....
00002aaaac51e000 4K r---- /lib64/libnss_files-2.5.so
00002aaaac51f000 4K rw--- /lib64/libnss_files-2.5.so
00007fff7143b000 84K rw--- [ stack ]
ffffffffff600000 8192K ----- [ anon ]
total 75180K
The -x option can be used to provide information about the memory allocation and mapping types per mapping. The amount of resident, non-shared anonymous, and locked memory is shown for each mapping:
pmap -x 3724


Clearing the disk cache

For experimentation, it's very convenient to be able to drop the disk cache. For this, we can use the special file /proc/sys/vm/drop_caches. By writing 3 to it, we can clear most of the disk cache:
$ free -m
total used free shared buffers cached
Mem: 1504 1471 33 0 36 801
-/+ buffers/cache: 633 871
Swap: 2047 6 2041

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ free -m
total used free shared buffers cached
Mem: 1504 763 741 0 0 134
-/+ buffers/cache: 629 875
Swap: 2047 6 2041

Notice how "buffers" and "cached" went down, free mem went up, and free+buffers/cache stayed the same.


You can refer to this link for additional information.

July 8, 2011

Disable Initial Configuration Tasks and Server Manager pop-up when an admin logs on

Launch MMC.exe and select File -> Add/Remove Snap-in, choose Group Policy, select Local Computer, and click OK.

Or run gpedit.msc.

From the Group policy snap-in, navigate to

Computer Configuration -> Administrative Templates -> System -> Server Manager.

There you will find three settings for Server Manager; one of them controls the automatic launch of Server Manager at logon. Enable that setting. The setting for the automatic launch of Initial Configuration Tasks (ICT) is available here as well.

For further details, click here.

June 16, 2011

Printers On Thin Client

Suppose we have a Windows 2008 Server and a thin client with IPs as follows:

Server IP: 192.168.0.1
Thin Client IP: 192.168.0.10


Adding printer:

First we need to make sure that the printer has been properly connected to the thin client and that printing has been enabled on it.

Now from Server (192.168.0.1), you need to add the printer.

While adding the printer, select only the "Local printer attached to this computer" option.

Leave the other two boxes unchecked and click Next.

Check the "Create a new port" box and use the drop-down to select "Standard TCP/IP Port."

On the following screen, you need to furnish the IP address of the printer ( ie IP Address of Thin Client).

After that, select your printer and continue until completion.

DOS Printing Or DOS apps to print from the printer

Go to Printers and Faxes and right-click the printer. Select Sharing, enable sharing, and provide a name for the shared printer or accept the default (remember what you named it; you'll need it in the next step, and keep the name under 8 characters for DOS compatibility).

Open a command prompt (a "DOS" window) and type:

net use lpt1: \\servername\printersharedname /persistent:yes
Or
net use lpt1: \\192.168.0.1\printersharedname /persistent:yes


That command captures the print jobs sent to LPT1 by your DOS app and redirects them to the printer. The "persistent" switch means that the mapping will be restored automatically every time your PC starts.

To "undo" the command, at a command prompt just type:

net use lpt1: /delete

April 27, 2011

Active Directory Install Password Error in Windows 2008 R2

When installing Active Directory on a Windows 2008 R2 server, the following error message appears:

—————————
Active Directory Domain Services Installation Wizard
—————————
The local Administrator account becomes the domain Administrator account when you create a new domain. The new domain cannot be created because the local Administrator account password does not meet requirements.

Currently, a password is not required for the local Administrator account. We recommend that you use the net user command-line tool with the /passwordreq:yes option to require a password for this account before you create the new domain; otherwise, a password will not be required for the domain Administrator account.

At a command prompt run:

net user administrator /passwordreq:yes

February 17, 2011

February 4, 2011

Preventing deletion of desktop icons



Log in to client pc with the domain administrators account.

Navigate to the user's profile folder, and open it.

Take ownership of the desktop folder, and de-select the inherit permissions check box.

Remove the user from the permissions access list, then re-add him. The user's permissions should now be Read, Execute, and List.

Then copy the shortcuts to desktop of domain users.

Note:

Make sure that the user logs on to the client PC at least once so that the profile folders, such as Desktop, are created.

This can be implemented in a domain and in a workgroup too.

To enable domain users to login to Server locally and through RDP




Please see the below image for more details:

Enable shutdown/restart permission for domain users

To enable shutdown/restart permission for domain users.


We need to edit the Group Policy, which is located here:
Start -> Administrative Tools -> Group Policy Management
Under Group Policy Management -> Forest -> Domains -> <Domain Name> -> Domain Controllers -> Default Domain Controllers Policy
Right-click on Default Domain Controllers Policy and choose "Edit" to open the
"Group Policy Management Editor". In the editor, go to:

User Configuration -> Administrative Templates -> Start Menu and Taskbar
Disable "Remove and prevent access to the Shut Down, Restart, and other options"

January 28, 2011

To disable auto login network share folders

The issue: every time we access a network machine, e.g. \\192.168.1.2, it opens automatically. The first time, there was a prompt for the username and password; now it never asks.

cmd-> net use \\IPorDOMAIN /del

Eg: cmd-> net use \\192.168.1.2 /del


Windows can store login information for network locations and websites; you can manage the stored credentials as follows:

start --> run --> rundll32.exe keymgr.dll,KRShowKeyMgr

January 8, 2011

Windows Server Backup Step-by-Step Guide for Windows Server 2008

The Windows Server Backup Step-by-Step Guide for Windows Server 2008 can be viewed from the link here.


Note: You can no longer back up to tape. (However, support of tape storage drivers is still included in Windows Server 2008.) Windows Server Backup supports backing up to external and internal disks, DVDs, and shared folders.


NTBackup in Windows Server 2003 was based on Veritas code; since Veritas is now part of Symantec, they probably did not come to an agreement. Note that the entire disk management suite has also been rewritten, because the original one was Veritas code as well.

January 3, 2011

Virtual Hard Disk in Windows 7 and Windows 2008 Server R2

A Virtual Hard Disk is a VHD file created on your hard drive that acts as a separate hard disk drive in Computer.

It is available only in Windows 7 and Windows 2008 Server R2.

For the documentation on how to create and attach a Virtual Hard Disk in Windows 7,
click here

How to Shrink and Extend NTFS Volumes in Windows Vista / 7

For the documentation on how to shrink and extend NTFS volumes in Windows Vista / 7, click here.