Named not starting...
Hi All,
We regularly get reports of named not running and of domains not resolving from outside. Here is the checklist to fix that.
First fix:
Check iptables / CSF / APF and stop these firewalls.
Then check "host domainname serverip" from outside.
If there are still issues, move on to the second fix.
In named.conf there may be two sets of view entries, one for internal and one for external clients.
You may need to add the following two lines to the external view to fix the issue.
match-clients { any; };
match-destinations { any; };
So that part of named.conf will look like the following:
###################
view "external" {
/* This view will contain zones you want to serve only to "external" clients
* that have addresses that are not on your directly attached LAN interface subnets:
*/
match-clients { any; };
match-destinations { any; };
recursion no;
// you'd probably want to deny recursion to external clients, so you don't
// end up providing free DNS service to all takers
// all views must contain the root hints zone:
zone "." IN {
type hint;
file "/var/named/named.ca";
};
// These are your "authoritative" external zones, and would probably
// contain entries for just your web and mail servers:
// BEGIN external zone entries
// Zone entries for your domains start here
############################
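After reloading named, a quick check from outside confirms the external view is answering (a minimal sketch; example.com and SERVER_IP are placeholders for one of the hosted domains and the server's public IP):
# rndc reload
# dig @SERVER_IP example.com A +short
# host example.com SERVER_IP
If the A record comes back on the direct query, external clients are matching the "external" view correctly.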
January 1, 2010
Error: "client denied by server configuration" while installing Cacti
vi /etc/httpd/conf.d/cacti.conf
Order Deny,Allow
Deny from all
Allow from 127.0.0.1 (Change this line to "Allow from all")
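After changing the Allow directive, Apache has to re-read its configuration before the error clears; on a stock CentOS/RHEL box that is typically one of the following (a sketch, assuming the standard init script and apachectl are present):
service httpd restart
apachectl graceful
The graceful variant reloads the configuration without dropping requests that are already being served.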
Cpanel script to correct mail directory's incorrect disk usage
Here is a tip from the cPanel team:
Sometimes mail accounts belonging to a particular cPanel account show zero disk usage, irrespective of the actual content. The following cPanel script can be used to fix this:
/scripts/generate_maildirsize --confirm --force --verbose baba38zz
This generates the mail directory size data for the user "baba38zz".
Please refer : https://tickets.cpanel.net/review/msg.cgi?ticketid=550573 for further clarification.
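If several accounts show the same symptom, the same script can be looped over every account on the server. A rough sketch, assuming the standard /var/cpanel/users layout (test it on a single account first):
for user in /var/cpanel/users/*; do
    /scripts/generate_maildirsize --confirm --force "$(basename "$user")"
done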
Exim Command set
1. How to remove all mails from the exim queue?
==================
rm -rf /var/spool/exim/input/*
2. Deleting Frozen Mails:
==================
To remove all frozen mails from the exim queue, use the following command -
exim -bpr | grep frozen | awk '{print $3}' | xargs exim -Mrm
exiqgrep -z -i | xargs exim -Mrm
3. If you want to only delete frozen messages older than a day:
=============================================
exiqgrep -zi -o 86400 | xargs exim -Mrm
where you can change 86400 depending on the time frame you want to keep (1 day = 86400 seconds; for example, use -o 259200 to target frozen messages older than three days).
4. To forcefully deliver mails in queue, use the following exim command:
=====================================================
exim -bpru |awk '{print $3}' | xargs -n 1 -P 40 exim -v -M
To flush the mail queue:
======================
exim -qff
/usr/sbin/exim -qff
To clear spam mails from Exim Queue:
==============================
grep -R -l "\[SPAM\]" /var/spool/exim/msglog/* | cut -b26- | xargs exim -Mrm
To clear frozen mails from Exim Queue.
==============================
grep -R -l '*** Frozen' /var/spool/exim/msglog/*|cut -b26-|xargs exim -Mrm
To clear mails from the Exim queue for which the recipient cannot be verified.
=====================================================================
grep -R -l 'The recipient cannot be verified' /var/spool/exim/msglog/*|cut -b26-|xargs exim -Mrm
To find exim queue details. It will show ( Count Volume Oldest Newest Domain ) details.
=====================================================================
exim -bp |exiqsumm
How to remove root mails from exim queue ?
==================================
When the mail queue is high due to root mails and you only need to remove the root mails, not any other valid mail:
exim -bp | grep "<root@HOSTNAME>" | awk '{print $3}' | xargs exim -Mrm
Replace "HOSTNAME" with server hostname
How to remove nobody mails from exim queue ?
==================================
When you need to clear nobody mails, you can use the following command.
exiqgrep -i -f nobody@HOSTNAME | xargs exim -Mrm (Use -f to search the queue for messages from a specific sender)
exiqgrep -i -r nobody@HOSTNAME | xargs exim -Mrm (Use -r to search the queue for messages for a specific recipient/domain)
Replace "HOSTNAME" with server hostname
Run a pretend SMTP transaction from the command line, as if it were coming from the given IP address. This will display Exim's checks, ACLs, and filters as they are applied. The message will NOT actually be delivered.
===========================
# exim -bh <IP address>
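For example, to see how the ACLs treat a connection from a particular address (192.0.2.25 is just a documentation/test address used for illustration):
# exim -bh 192.0.2.25
Exim then runs a fake SMTP session on the terminal; typing HELO, MAIL FROM and RCPT TO commands shows which ACLs accept or reject them, and nothing is actually delivered.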
Sharepoint Server
Refer to these sites:
http://www.rodneyviana.com/portal/Sharepoint/MOSS2007Install1of2/tabid/55/Default.aspx
http://www.rodneyviana.com/portal/Sharepoint/MOSS2007Install2of2/tabid/57/Default.aspx
Windows Plesk I cannot connect to the MySQL database as root/admin
Solution: Regretfully, MySQL is pre-installed by Plesk without the old-passwords functionality set to true.
But the Plesk installer populates the admin user with a password encrypted in the old style!
To fix this you will need to enter your server with Remote Desktop and edit the following file:
C:\SWSoft\Plesk\Databases\MySQL\Data\my.ini
There is a section in that file [mysqld]
Under the line starting "[mysqld]" add the following line:
old_passwords=1
so it looks like this:
[mysqld]
old_passwords=1
Now restart the MySQL service in the services manager.
You should now be able to manage MySQL with the user "admin" and the password is the same as your Plesk login. In some cases the password may still be set to the original password which is "setup".
If you still cannot log into the MySQL server then you will need to start the MySQL server with skip-grant-tables enabled.
Stop the MySQL service in the services manager.
Edit the my.ini file:
C:\SWSoft\Plesk\Databases\MySQL\Data\my.ini
Add "skip-grant-tables" to the [mysqld] section like so:
[mysqld]
old_passwords=1
skip-grant-tables
Now restart the MySQL service.
Now log into MySQL with the following command:
C:\SWSoft\Plesk\Databases\MySQL\bin\mysql "mysql"
At the MySQL prompt set the admin password:
mysql> update user set password=password('new_password') where user='admin';
Where "new_password" is the password of the PLESK admin login.
type "exit" at the MySQL prompt and then edit the my.ini file again and remove the line: skip-grant-tables
Now restart the MySQL service.
Your MySQL admin password has now been re-set.
Script for sending event logs to a mail address
You can choose any name, say eventlog_monitor.vbs.
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate, (Security)}!\\" & _
strComputer & "\root\cimv2")
Set colMonitoredEvents = objWMIService.ExecNotificationQuery _
("Select * from __instancecreationevent where " _
& "TargetInstance isa 'Win32_NTLogEvent' " _
& "and TargetInstance.EventCode = '1002' ")
Do
Set objLatestEvent = colMonitoredEvents.NextEvent
strAlertToSend = objLatestEvent.TargetInstance.User _
& "ALERT!!! Event 1002 Siebel Application Crash file has been generated."
Set objEmail = CreateObject("CDO.Message")
objEmail.From = "eventemailnotifier@test.org"
objEmail.To = "test@test.org"
objEmail.cc = "test@test.org"
objEmail.Subject = "TESTSERVER logged event ID 1002, crash file generated"
objEmail.Textbody = "On TESTSERVER an event ID 1002 has been logged and a crash file has been generated. TO check the crash file click on the attached link and this will bring you to the BIN directory where the crash file resides."
objEmail.AddAttachment "C:\bin.lnk"
objEmail.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
objEmail.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = _
"smarthost.smarthost.com"
objEmail.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
objEmail.Configuration.Fields.Update
objEmail.Send
Loop
__________
===============================================================================
This is an alternative to the code above, but I prefer the first one.
Create the first file event.vbs
_________________event.vbs________________________ ________
strComputer = "sbappdev01"
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate, (Security)}!\\" & _
strComputer & "\root\cimv2")
Set colMonitoredEvents = objWMIService.ExecNotificationQuery _
("Select * from __instancecreationevent where " _
& "TargetInstance isa 'Win32_NTLogEvent' " _
& "and TargetInstance.EventCode = '1002' ")
Do
Set objLatestEvent = colMonitoredEvents.NextEvent
strAlertToSend = objLatestEvent.TargetInstance.User _
& "ALERT!!! Event 1002 Siebel Application Crash file has been generated."
Wscript.Echo strAlertToSend
Loop
_________________event.vbs________________________ ________
Create the second file email.vbs
_________________email.vbs________________________ ________
'**************************************
' Name: A++ Send Lotus Notes Email VBS Script
' Description: Simple script to send a Lotus Notes email. Can be modified to automate tasks on the server.
' By: Steven Jacobs
'
' This code is copyrighted and has limited warranties. Please see
' http://www.Planet-Source-Code.com/vb/scripts/ShowCode.asp?txtCodeId=8815&lngWId=4 for details.
'**************************************
On Error Goto 0: sendLNMail()
Dim s
Dim db
Dim doc
Dim rtitem
Dim subj
Dim bdy
Dim recips(2)
'File System Object Decs
Dim fs
Dim fName
Dim path
Sub sendLNMail()
On Error Resume Next
'///////////////////////////////////////
' ////////////////////////////////
'Begin Error/Input Routines
'Created by Steven Jacobs
'2004
'///////////////////////////////////////
' ////////////////////////////////
'Get subject...if no subject, exit sub
subj = "Siebel Crash File"
if subj = "test" Then
MsgBox "You need a subject"
Exit Sub
End if
'Get body text...if no body text, exit sub
bdy = "On Servertest Event ID 1002 has been logged, a crash file has been created in the BIN directory. Click on the attached link to open the BIN directory"
if bdy = "test" Then
MsgBox "You need body text"
Exit Sub
End if
Set fs = createobject("Scripting.FileSystemObject")
if fs Is Nothing Then
MsgBox "Could Not Create FileSystemObject",16,"File System Object Error."
endMe
Exit Sub
End if
fName = "C:\bin.lnk"
if fName = "" Then
MsgBox "Empty Path"
endMe
Exit Sub
End if
path = fs.GetAbsolutePathName(fName)
if Not fs.FileExists(path) Then
MsgBox "File does Not exist In directory you specified"
endMe
End if
'///////////////////////////////////////
' ////////////////////////////////
'End Error/Input Routines
'///////////////////////////////////////
' ////////////////////////////////
Set s = createobject("Notes.NotesSession")
if s Is Nothing Then
MsgBox "Could Not Create A Session Of Notes",16,"Notes Session Error."
endMe
Exit Sub
End if
'See if we can create the main object (session)
if Err.Number <> 0 Then
On Error Goto 0
MsgBox "Could Not create session 'Lotus Notes' from object"
Exit Sub
End if
Set db = s.getdatabase(s.getenvironmentstring("MailServer",True),s.getenvironmentstring("Mailfile",true))
'See if we can get a handle on the mail file
if Err.Number <> 0 Then
On Error Goto 0
MsgBox "Could find or Get a handle on the mail file"
Exit Sub
End if
Set doc = db.createdocument
Set rtitem = doc.createrichtextitem("BODY")
recips(1) = "yourname@yourname.org"
recips(2) = "yourname@yourname.org"
With doc
.form = "Memo"
.subject = subj
.sendto = "
.body = bdy
.postdate = Date
End With
call rtitem.embedobject(1454,"",fName)
doc.visible = True
doc.send False
'if we made it this far, alert the user
' the mail memo has been created and sent
endMe
End Sub
Sub endMe()
'clean objects/memory
Set s = nothing
Set db = nothing
Set doc = nothing
Set rtitem = nothing
Set fs = nothing
End Sub
_________________email.vbs________________________ ________
Create a bat file to execute the scripts in order, say log_execute.bat
=================
EX.
@ECHO OFF
Call event.vbs
Call email.vbs
=================
Update TTL value of all domains in windows Plesk server
MAKE SURE TO TAKE THE BACKUP OF 'psa' DATABASE.
Start > run >cmd
cd %plesk_bin%
C:\Program Files\SWsoft\Plesk\admin\bin>cd ..
C:\Program Files\SWsoft\Plesk\admin>cd ..
C:\Program Files\SWsoft\Plesk>cd MySQL\bin
C:\Program Files\SWsoft\Plesk\MySQL\bin>
C:\Program Files\SWsoft\Plesk\MySQL\bin>mysql -uadmin -p -P8306
Enter password: ***********
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1817 to server version: 4.1.22-community-nt
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use psa;
Database changed
mysql> desc dns_zone;
Field         | Type                               | Null | Key | Default       | Extra
--------------+------------------------------------+------+-----+---------------+---------------
id            | int(10) unsigned                   |      | PRI | NULL          | auto_increment
name          | varchar(255)                       |      |     |               |
displayName   | varchar(255)                       |      |     |               |
email         | varchar(255)                       | YES  |     | NULL          |
status        | int(10) unsigned                   |      |     | 0             |
type          | enum('slave','master')             |      |     | master        |
ttl           | int(10) unsigned                   |      |     | 86400         |
ttl_unit      | int(10) unsigned                   |      |     | 1             |
refresh       | int(10) unsigned                   |      |     | 10800         |
refresh_unit  | int(10) unsigned                   |      |     | 1             |
retry         | int(10) unsigned                   |      |     | 3600          |
retry_unit    | int(10) unsigned                   |      |     | 1             |
expire        | int(10) unsigned                   |      |     | 604800        |
expire_unit   | int(10) unsigned                   |      |     | 1             |
minimum       | int(10) unsigned                   |      |     | 86400         |
minimum_unit  | int(10) unsigned                   |      |     | 1             |
serial_format | enum('UNIXTIMESTAMP','YYYYMMDDNN') |      |     | UNIXTIMESTAMP |
serial        | varchar(12)                        |      |     | 0             |
18 rows in set (0.00 sec)
mysql> update dns_zone set ttl="180" where ttl="86400";
Query OK, 55 rows affected (0.05 sec)
Rows matched: 55 Changed: 55 Warnings: 0
mysql> update dns_zone set ttl_unit="180" where ttl_unit="86400";
Query OK, 32 rows affected (0.03 sec)
Rows matched: 32 Changed: 32 Warnings: 0
mysql> select ttl from dns_zone;
Finally, run:
----------------
C:\Program Files\SWsoft\Plesk\admin\bin>"%plesk_bin%\dnsmng" update *
Also, please go through the following link:
http://kb.parallels.com/en/1114
Please run the query given in the link only after describing the table. DO NOT BLINDLY APPLY THE QUERY GIVEN IN THE LINK.
Get the OS version
If you are not sure which *nix-based distro you are using, try the following.
cat /etc/redhat-release
cat /etc/debian_version
cat /etc/SuSE-release
cat /etc/slackware-version
cat /etc/gentoo-release
You could also try 'cat /etc/*-release' or 'cat /etc/*-version'.
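If none of the release files exist, the kernel and the LSB tools usually still answer (lsb_release may require the distribution's lsb or redhat-lsb package):
uname -rm
lsb_release -a 2>/dev/null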
Cpanel Pre-requisites
If you are submitting an OS reload ticket for a server on which cPanel is to be installed, please ensure that the OS is one of those listed below. Please do not request any other OS, including Fedora, as it is not supported by cPanel.
* CentOS (free) versions 3.x, 4.x, 5.x
* Red Hat® Enterprise Linux® versions 2.1, 3.x, 4.x, 5.x
* FreeBSD®-RELEASE versions 6.0, 6.1, 6.2, 6.3, 7.0
See the complete requirements here, including details of the virtual environments supported.
http://cpanel.net/products/cpanelwhm/system-requirements.html
Some useful bash scripts
svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r
Prints total line count contribution per user for an SVN repository.
I'm working in a group project currently and annoyed at the lack of output by my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this command up.
It will print a full repository listing of all files, remove the directories which confuse blame, run svn blame on each individual file, and tally the resulting line counts. It seems quite slow, depending on your repository location, because blame must hit the server for each individual file. You can remove the -R on the first part to print out the tallies for just the current directory.
man -P cat ls > man_ls.txt
save a manpage to plaintext file. Output manpage as plaintext using cat as pager: man -P cat commandname
And redirect its stdout into a file: man -P cat commandname > textfile.txt
ls /var/lib/dpkg/info/*.list -lht |less
Find the dates your debian/ubuntu packages were installed.
ls -drt /var/log/* | tail -n5 | xargs sudo tail -n0 -f
Follow the most recently updated log files. This command finds the 5 (-n5) most frequently updated logs in /var/log, and then does a multifile tail follow of those log files. Alternately, you can do this to follow a specific list of log files:
sudo tail -n0 -f /var/log/{messages,secure,cron,cups/error_log}
find / | xargs ls -l | tr -s ' ' | cut -d ' ' -f 1,3,4,9
Command for getting the list of files with perms, owners, groups info. Useful to find the checksum of 2 machines/images.
ls -1 | grep -v '\.jpg$' | xargs rm
Delete files that do not have a given extension. This deletes files in a directory if they do not have the extension (here .jpg); useful if we want to keep only one kind of file in a directory.
ls *tgz | xargs -n1 tar xzf
extract all tgz in current dir
man -Tps ls >> ls_manpage.ps && ps2pdf ls_manpage.ps
Convert man page to PDF. Creates a PDF (over ps as intermediate format) out of any given manpage. Other useful arguments for the -T switch are dvi, utf8 or latin1.
ls /mnt/badfs &
Check if filesystem hangs. When a fs hangs and you've just one console, even # ls could be a dangerous command. Simply put a trailing "&" and play safe
ls -1t | head -n10
find the 10 latest (modified) files
order the files by modification time, one file per output line and filter first 10
ls -s | sort -nr | more
find large files
man ls | col -b > ~/Desktop/man_ls.txt
Convert "man page" to text fil. You can convert any UNIX man page to .txt
ls -la | sort -k 5bn
Huh? Where did all my precious space go ? Sort ls output of all files in current directory in ascending order
Just the 20 biggest ones:
ls -la | sort -k 5bn | tail -n 20
A variant for the current directory tree with subdirectories and pretty columns is:
find . -type f -print0 | xargs -0 ls -la | sort -k 5bn | column -t
And finding the subdirectories consuming the most space with displayed block size 1k:
du -sk ./* | sort -k 1bn | column -t
ls | curl -F 'sprunge=<-' http://sprunge.us | xclip
Run a command, store the output in a pastebin on the internet and place the URL on the X clipboard. The URL can then be pasted with a middle click. This is probably useful when trying to explain problems over instant messaging when you don't have some sort of shared desktop.
find ./* -ctime -1 | xargs ls -ltr --color
Files and directories changed in the last day. Added alias in ~/.bashrc: alias lf='find ./* -ctime -1 | xargs ls -ltr --color'
less -Rf <( cat <(ls -l --color=always) <(ls -ld --color=always .*) )
Scrollable colorized long listing, hidden files sorted last. To sort hidden files first, simply switch the two inner `ls` commands. I have this aliased to `dira`; `dir` is aliased to the simpler version with no hidden files.
for dir in $(ls -l | grep ^d | awk -F" " '{ print $9 }') ; do cd $dir; echo script is in $dir ; cd .. ; done
Perform an action in each subdirectory of the current working directory. Will find directory names via `ls -l | grep ^d | awk -F" " '{ print $9 }'`, cd into them one at a time, and perform some action (`echo script is in $dir` in the example). The `awk -F" " '{ print $9 }'` can be written in the shorter form `awk '{ print $9 }'`, forgoing the -F" " which tells awk that the columns are separated by a single space, which is the default for awk. I like to include the -F" " to remind myself of the syntax.
find / \( -name "*.log" -o -name "*.mylogs" \) -exec ls -lrt {} \; | sort -k6,8 | head -n1 | cut -d" " -f8- | tr -d '\n' | xargs -0 rm
Find and delete the oldest file of specific types in a directory tree. This works on my ubuntu/debian machines. I suspect other distros need some tweaking of sort and cut. I am sure someone could provide a shorter/faster version.
for files in $(ls -A directory_name); do sed 's/search/replaced/g' $files > $files.new && mv $files.new $files; done;
Search and replace in multiple files and save them with the same names - quickly and effectively!
Yeah, there are many ways to do that.
Doing it with sed in a for loop is my favourite, because these are two basic things in all *nix environments. sed by default does not allow saving the output to the same file, so we'll use mv to do that in batch along with the sed.
p=$(netstat -nate 2>/dev/null | awk '/LISTEN/ {gsub (/.*:/, "", $4); if ($4 == "4444") {print $8}}'); for i in $(ls /proc/|grep "^[1-9]"); do [[ $(ls -l /proc/$i/fd/|grep socket|sed -e 's|.*\[\(.*\)\]|\1|'|grep $p) ]] && cat /proc/$i/cmdline && echo; done
netstat -p recoded (totally useless..)
OK, so it's really a useless line and I'm sorry for that; furthermore, it's not optimized at all...
At the beginning I didn't manage, using netstat -p, to print out which process was handling the open port 4444; I realized at the end that I was not root and security restrictions applied ;p
It's nevertheless a (good?) way to see how ps(tree) works, as it acts exactly the same way by reading in /proc.
So for a specific port, this line returns the calling command line of every thread that handles the associated socket.
ls | sed -n -r 's/banana_(.*)_([0-9]*).asc/mv & banana_\2_\1.asc/gp' | sh
Smart renaming
A powerful way to rename files using sed groups.
& stands for the matched expression.
\1 refers to the first group between parentheses, \2 to the second.
ls -S -lhr
list and sort files by size in reverse order (file size in human readable output)
This command lists and sorts files by size in reverse order; the reverse order is very helpful when you have a very long list and wish to have the biggest files at the bottom so you don't have to scroll up.
The file size info is in human-readable output, e.g. 1K, 234M, 3G.
Tested with Linux (Red Hat Enterprise Edition)
for img in $( ls *.CR2 ); do convert $img $img.jpg; done
Convert Raw pictures to jpg
ls /home | head -64 | barcode -t 4x16 | lpr
printing barcodes
64 elements max on 16 rows, 4 cols.
GNU Barcode will automagically adapt the width and the height of your elements to fill the page.
Standard output format is PostScript.
find -type f -printf '%P\000' | egrep -iz '\.(avi|mpg|mov|flv|wmv|asf|mpeg|m4v|divx|mp4|mkv)$' | sort -z | xargs -0 ls -1
Show all video files in the current directory (and sub-dirs).
sed -n '/^function h\(\)/,/^}/p' script.sh
Extract a bash function
I often need to extract a function from a bash script and this command will do it.
geoip(){curl -s "http://www.geody.com/geoip.php?ip=${1}" | sed '/^IP:/!d;s/<[^>][^>]*>//g' ;}
geoip lookup
echo start > battery.txt; watch -n 60 'date >> battery.txt ; acpi -b >> battery.txt'
Battery real-life energy vs. predicted remaining, plotted. This time I added a time-stamped print of the remaining energy every minute.
The example shown here is complete and points to large discrepancies as time passes, converging to accuracy near the end.
for i in $(netstat --inet -n|grep ESTA|awk '{print $5}'|cut -d: -f1);do geoiplookup $i;done
Localize the provenance of currently established connections. Sample command to obtain the geographic location of established connections, extracted from netstat. Needs the geoiplookup command (part of the geoip package under CentOS).
echo start > battery.txt; watch -n 60 'date >> battery.txt'
Do you really believe the Battery Remaining Time? Confirm it from time to time!
Fully recharge your computer battery and start this script.
It will create or clean the file named battery.txt, print "start" into it, and every minute it will append a time stamp to it.
Batteries last few hours, and each hour will have 60 lines of time stamping. Really good for assuring the system was tested in real life with no surprises.
The last time stamp inside the battery.txt file is of interest. It is the time the computer went off, as the battery was dead!
Turn on your computer after that, on AC power of course, and open battery.txt. Read the first and last time stamps and now you really know if you can trust your computer sensors.
If you want a simple line of text inside the battery.txt file, use this:
watch -n 60 'date > battery.txt'
The time of death will be printed inside
weather() { lynx -dump "http://mobile.weather.gov/port_zh.php?inputstring=$*" | sed 's/^ *//;/ror has occ/q;2h;/__/!{x;s/\n.*//;x;H;d};x;s/\n/ -- /;q';}
Show current weather for any US city or zipcode
Scrape the National Weather Service.
ord() { printf "%d\n" "'$1"; }
Get decimal ascii code from character
printf treats first char after single ' as numeric equivalent
for file in *.iso; do mkdir `basename $file | awk -F. '{print $1}'`; sudo mount -t iso9660 -o loop $file `basename $file | awk -F. '{print $1}'`; done
Make directories for and mount all iso files in a folder.
Enable Disk Quota in VPS
If you are experiencing an issue with OpenVZ VPS disk quota, please make sure that the following values are set in the VPS conf.
VPS conf Location : /etc/vz/conf/VEId.conf
In the main node, do the following steps.
1) #grep DISK_QUOTA /etc/vz/conf/VEId.conf
If no disk quota value is found or it is disabled, change the value to
DISK_QUOTA=yes
2) Check that disk quota is enabled in the main server itself.
grep DISK_QUOTA /etc/sysconfig/vz
If not, set the value to yes in the conf.
DISK_QUOTA=yes
3) Check for the value quotaugidlimit .
#grep -i quotaugidlimit /etc/vz/conf/veid.conf
4) Check whether the quota module "vzdquota" is loaded in the main node.
# lsmod |grep -i vzdquota
5) You can set the value quotaugidlimit from the main node using the below command.
vzctl set veid --quotaugidlimit 500 --save
6) Make sure to reboot the mentioned node from the main node.
vzctl restart veid
7) Enter the node for which you are experiencing the problem.
vzctl enter veid
Type the command 'mount'. It should give a similar output.
# mount
/dev/simfs on / type reiserfs (rw,usrquota,grpquota)
/proc on /proc type proc (rw)
/sys on /sys type sysfs (rw)
none on /dev type tmpfs (rw)
none on /dev/pts type devpts (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Also make sure that symbolic links exist from aquota.group and aquota.user to their respective physical locations.
# ll
total 64
lrwxrwxrwx 1 root root 39 Oct 19 11:41 aquota.group -> /proc/vz/vzaquota/00000030/aquota.group
lrwxrwxrwx 1 root root 38 Oct 19 11:41 aquota.user -> /proc/vz/vzaquota/00000030/aquota.user
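Once the links are in place, the normal quota tools inside the container should report real numbers. A quick sanity check (repquota comes from the standard quota package, which may need to be installed inside the container):
# vzctl enter VEID
# repquota -a | head
If repquota prints per-user usage rather than errors, second-level quotas are working.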
--
FastCGI Links
http://www.r0xarena.com/blog/installing-php-529-with-fastcgi-from-source-in-centos-53/
http://www.fastcgi.com
http://www.seaoffire.net/fcgi-faq.html#II
Install PHP, Apache, MySQL - bash script
http://www.robertswarthout.com/2007/01/install-of-apache-223-mysql-5018-and-php-520/
64 bit version
---------------------
#!/bin/sh
# Abort on any errors
set -e
# Where do you want all this stuff built?
SRCDIR=/home/support/software/source
# Unpack our large file that contains all needed packages that are not going to be obtained from yum
#rm -Rf ${SRCDIR}
#gzip -d nbs_core_install.gz
#cd nbs_core_install/
#cp -R * ${SRCDIR}/
# Install gcc
yum -y install gcc
# Install cc
yum -y install cc
# Install libtool
yum -y install libtool
# Extract OpenSSH and install openssh and the sshd in the init.d
cd ${SRCDIR}
tar xzf ${SRCDIR}/openssh-4.5p1.tar.gz
cd ${SRCDIR}/openssh-4.5p1
./configure
make
make install
cp -f ${SRCDIR}/sshd.init.d /etc/rc.d/init.d/sshd
chmod 755 /etc/rc.d/init.d/sshd
rm -f /etc/rc.d/rc3.d/S70sshd
cd /etc/rc.d/rc3.d && ln -s /etc/rc.d/init.d/sshd S70sshd
rm -Rf ${SRCDIR}/openssh-4.5p1
# Install libjpeg-devel
yum -y install libjpeg-devel
# Install jpeg.v6b
cd ${SRCDIR}
tar xzf ${SRCDIR}/jpegsrc.v6b.tar.gz
cd ${SRCDIR}/jpeg-6b
cp /usr/share/libtool/config.guess ./
cp /usr/share/libtool/config.sub ./
./configure --enable-shared
make libdir=/usr/lib64
install -d /usr/local/man/man1
make libdir=/usr/lib64 install
rm -Rf ${SRCDIR}/jpeg-6b
# Install libtiff & libtiff-devel
yum -y install libtiff libtiff-devel
# Install Ghostscript (needed for ImageMagick)
yum -y install ghostscript
# Install ImageMagick
yum -y install ImageMagick
# Install libxml2
cd ${SRCDIR}
tar xzf ${SRCDIR}/libxml2-2.6.27.tar.gz
cd ${SRCDIR}/libxml2-2.6.27
./configure --enable-shared
make
make install
make tests
rm -Rf ${SRCDIR}/libxml2-2.6.27
# Install libxslt
cd ${SRCDIR}
tar xzf ${SRCDIR}/libxslt-1.1.19.tar.gz
cd ${SRCDIR}/libxslt-1.1.19
./configure --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/libxslt-1.1.19
# Install zlib
cd ${SRCDIR}
tar xzf ${SRCDIR}/zlib-1.2.3.tar.gz
cd ${SRCDIR}/zlib-1.2.3
./configure --shared --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/zlib-1.2.3
# Install libmcrypt
cd ${SRCDIR}
tar xzf ${SRCDIR}/libmcrypt-2.5.7.tar.gz
cd ${SRCDIR}/libmcrypt-2.5.7
./configure --disable-posix-threads --prefix=/usr
make
make install
# Install libmcrypt lltdl
cd ${SRCDIR}/libmcrypt-2.5.7/libltdl
./configure --prefix=/usr --enable-ltdl-install
make
make install
rm -Rf ${SRCDIR}/libmcrypt-2.5.7
# Install mhash
cd ${SRCDIR}
tar xzf ${SRCDIR}/mhash-0.9.7.1.tar.gz
cd ${SRCDIR}/mhash-0.9.7.1
./configure --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/mhash-0.9.7.1
# Install Freetype
cd ${SRCDIR}
tar xzf ${SRCDIR}/freetype-2.2.1.tar.gz
cd ${SRCDIR}/freetype-2.2.1
./configure
make
make install
rm -Rf ${SRCDIR}/freetype-2.2.1
# Install libidn
cd ${SRCDIR}
tar xzf ${SRCDIR}/libidn-0.6.9.tar.gz
cd ${SRCDIR}/libidn-0.6.9
./configure --with-iconv-prefix=/usr --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/libidn-0.6.9
# Install OpenSSL
cd ${SRCDIR}
tar xzf ${SRCDIR}/openssl-0.9.8d.tar.gz
cd ${SRCDIR}/openssl-0.9.8d
./config
make
make test
make install
rm -Rf ${SRCDIR}/openssl-0.9.8d
# Install cURL
cd ${SRCDIR}
tar xzf ${SRCDIR}/curl-7.15.0.tar.gz
cd ${SRCDIR}/curl-7.15.0
./configure --enable-ipv6 --enable-cookies --enable-crypto-auth --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/curl-7.15.0
# Install c-client (IMAP)
cd ${SRCDIR}
tar xzf ${SRCDIR}/imap-2004g.tar.Z
cd ${SRCDIR}/imap-2004g
make lrh
cp c-client/c-client.a /usr/lib/libc-client.a
cp c-client/*.h /usr/include
rm -Rf ${SRCDIR}/imap-2004g
# Install Apache
cd ${SRCDIR}
tar xzf ${SRCDIR}/httpd-2.2.3.tar.gz
cd ${SRCDIR}/httpd-2.2.3
./configure --enable-rewrite --enable-ssl --enable-deflate --enable-so --enable-proxy --prefix=/usr/local/apache2
make
make install
rm -Rf ${SRCDIR}/httpd-2.2.3
# Install httpd init.d file
cd ${SRCDIR}
rm -f /etc/rc.d/init.d/httpd
cp httpd.init.d /etc/rc.d/init.d/httpd
chmod 755 /etc/rc.d/init.d/httpd
rm -f /etc/rc.d/rc3.d/K*httpd
rm -f /etc/rc.d/rc3.d/S*httpd
cd /etc/rc.d/rc3.d && ln -s /etc/rc.d/init.d/httpd S15httpd
# Install libpng
cd ${SRCDIR}
tar xzf ${SRCDIR}/libpng-1.2.15.tar.gz
cd ${SRCDIR}/libpng-1.2.15
./configure
make
make install
rm -Rf ${SRCDIR}/libpng-1.2.15
# Install MySQL
yum -y install perl-DBI
cd ${SRCDIR}
rpm -i MySQL-server-standard-5.0.18-0.rhel4.x86_64.rpm
rpm -i MySQL-client-standard-5.0.18-0.rhel4.x86_64.rpm
rpm -i MySQL-devel-standard-5.0.18-0.rhel4.x86_64.rpm
rpm -i MySQL-shared-standard-5.0.18-0.rhel4.x86_64.rpm
rpm -i MySQL-standard-debuginfo-5.0.18-0.rhel4.x86_64.rpm
ln -s /usr/lib64/mysql/libmysqlclient.a /usr/lib/libmysqlclient.a
# Install Flex
yum -y install flex
# Install libdv
yum -y install libdv
# Install re2c, needed for the PHP compile
cd ${SRCDIR}
tar zxf ${SRCDIR}/re2c-0.11.0.tar.gz
cd ${SRCDIR}/re2c-0.11.0
./configure
make
make install
# Install PHP
cd ${SRCDIR}
tar xzf ${SRCDIR}/php-5.2.0.tar.gz
cd ${SRCDIR}/php-5.2.0
./configure '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-mcrypt=/usr' '--with-config-file-path=/etc' '--with-bz2' '--with-curl' '--with-curl-ssl' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-gd' '--with-ttf' '--with-gdbm' '--with-gettext' '--with-ncurses' '--with-gmp' '--with-iconv' '--with-jpeg' '--with-openssl' '--with-png' '--with-regex=system' '--with-xsl=/usr' '--with-expat-dir=/usr' '--with-zlib' '--with-layout=GNU' '--with-kerberos=/usr/kerberos' '--with-apxs2=/usr/local/apache2/bin/apxs' '--without-oci8' '--enable-inline-optimization' '--enable-gd-native-ttf' '--enable-exif' '--enable-ftp' '--enable-sockets' '--enable-trans-sid' '--enable-memory-limit' '--disable-rpath' '--disable-debug' '--with-mysql=/usr/local/mysql' '--with-mysqli=/usr/bin/mysql_config'
make
make install
cp php.ini-dist /etc/php.ini
rm -Rf ${SRCDIR}/php-5.2.0
# Install Memcache PHP Extension
cd ${SRCDIR}
tar xzf ${SRCDIR}/memcache-2.1.0.tgz
cd ${SRCDIR}/memcache-2.1.0
phpize
./configure
make
mkdir /usr/local/phpextensions/
cp ${SRCDIR}/memcache-2.1.0/modules/memcache.so /usr/local/phpextensions/
rm -Rf ${SRCDIR}/memcache-2.1.0
# Install APC (php cache)
cd ${SRCDIR}
tar xzf ${SRCDIR}/APC-3.0.12p2.tgz
cd ${SRCDIR}/APC-3.0.12p2
phpize
./configure --enable-apc-mmap=yes --with-apxs2=/usr/local/apache2/bin/apxs
make
cd ${SRCDIR}/APC-3.0.12p2/modules/
cp apc.so /usr/local/phpextensions
# Install libevent-devel
yum -y install libevent-devel
# Install Memcache Daemon
cd ${SRCDIR}
tar xzf ${SRCDIR}/memcached-1.2.1.tar.gz
cd ${SRCDIR}/memcached-1.2.1
./configure
make
make install
rm -Rf ${SRCDIR}/memcached-1.2.1
# Install BRUTIS
cd ${SRCDIR}
cp monitor.php /home/support/
chmod 700 /home/support/monitor.php
cp ${SRCDIR}/crontab_root /var/spool/cron/root
# Install perl-Net-SSLeay
yum -y install perl-Net-SSLeay
# Install Webmin
cd ${SRCDIR}
rpm -U webmin-1.310-1.noarch.rpm
echo
echo
echo ---------- INSTALL COMPLETE! ----------
echo
echo
32 bit version
----------------------
#!/bin/sh
# Abort on any errors
set -e
# Where do you want all this stuff built?
SRCDIR=/home/support/software/source
# Install gcc
yum -y install gcc
# Install libtool
yum -y install libtool
# Extract OpenSSH and install openssh and the sshd in the init.d
cd ${SRCDIR}
tar xzf ${SRCDIR}/openssh-4.5p1.tar.gz
cd ${SRCDIR}/openssh-4.5p1
./configure
make
make install
cp -f ${SRCDIR}/sshd.init.d /etc/rc.d/init.d/sshd
chmod 755 /etc/rc.d/init.d/sshd
rm -f /etc/rc.d/rc3.d/S70sshd
cd /etc/rc.d/rc3.d && ln -s /etc/rc.d/init.d/sshd S70sshd
rm -Rf ${SRCDIR}/openssh-4.5p1
# Install libjpeg-devel
yum -y install libjpeg-devel
# Install jpeg.v6b
cd ${SRCDIR}
tar xzf ${SRCDIR}/jpegsrc.v6b.tar.gz
cd ${SRCDIR}/jpeg-6b
cp /usr/share/libtool/config.guess ./
cp /usr/share/libtool/config.sub ./
./configure --enable-shared
make
install -d /usr/local/man/man1
make install
rm -Rf ${SRCDIR}/jpeg-6b
# Install libtiff & libtiff-devel
yum -y install libtiff libtiff-devel
# Install Ghostscript (needed for ImageMagick)
yum -y install ghostscript
# Install ImageMagick
yum -y install ImageMagick
# Install libxml2
cd ${SRCDIR}
tar xzf ${SRCDIR}/libxml2-2.6.27.tar.gz
cd ${SRCDIR}/libxml2-2.6.27
./configure --enable-shared
make
make install
make tests
rm -Rf ${SRCDIR}/libxml2-2.6.27
# Install libxslt
cd ${SRCDIR}
tar xzf ${SRCDIR}/libxslt-1.1.19.tar.gz
cd ${SRCDIR}/libxslt-1.1.19
./configure --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/libxslt-1.1.19
# Install libmcrypt
cd ${SRCDIR}
tar xzf ${SRCDIR}/libmcrypt-2.5.7.tar.gz
cd ${SRCDIR}/libmcrypt-2.5.7
./configure --prefix=/usr
make
make install
# Install Freetype
cd ${SRCDIR}
tar xzf ${SRCDIR}/freetype-2.2.1.tar.gz
cd ${SRCDIR}/freetype-2.2.1
./configure
make
make install
rm -Rf ${SRCDIR}/freetype-2.2.1
# Install OpenSSL
cd ${SRCDIR}
tar xzf ${SRCDIR}/openssl-0.9.8d.tar.gz
cd ${SRCDIR}/openssl-0.9.8d
./config
make
make test
make install
rm -Rf ${SRCDIR}/openssl-0.9.8d
# Install cURL
cd ${SRCDIR}
tar xzf ${SRCDIR}/curl-7.15.0.tar.gz
cd ${SRCDIR}/curl-7.15.0
./configure --enable-ipv6 --enable-cookies --enable-crypto-auth --prefix=/usr
make
make install
rm -Rf ${SRCDIR}/curl-7.15.0
# Install c-client (IMAP)
cd ${SRCDIR}
tar xzf ${SRCDIR}/imap-2004g.tar.Z
cd ${SRCDIR}/imap-2004g
make lrh
cp c-client/c-client.a /usr/lib/libc-client.a
cp c-client/*.h /usr/include
rm -Rf ${SRCDIR}/imap-2004g
# Install Apache
cd ${SRCDIR}
tar xzf ${SRCDIR}/httpd-2.2.3.tar.gz
cd ${SRCDIR}/httpd-2.2.3
./configure --enable-rewrite --enable-ssl --enable-deflate --enable-so --enable-proxy --prefix=/usr/local/apache2
make
make install
rm -Rf ${SRCDIR}/httpd-2.2.3
# Install httpd init.d file
cd ${SRCDIR}
rm -f /etc/rc.d/init.d/httpd
cp httpd.init.d /etc/rc.d/init.d/httpd
chmod 755 /etc/rc.d/init.d/httpd
rm -f /etc/rc.d/rc3.d/K*httpd
rm -f /etc/rc.d/rc3.d/S*httpd
cd /etc/rc.d/rc3.d && ln -s /etc/rc.d/init.d/httpd S15httpd
# Install libpng
cd ${SRCDIR}
tar xzf ${SRCDIR}/libpng-1.2.15.tar.gz
cd ${SRCDIR}/libpng-1.2.15
./configure
make
make install
rm -Rf ${SRCDIR}/libpng-1.2.15
# Install MySQL
yum -y install perl-DBI
cd ${SRCDIR}
rpm -i MySQL-server-standard-5.0.18-0.rhel3.i386.rpm
rpm -i MySQL-client-standard-5.0.18-0.rhel3.i386.rpm
rpm -i MySQL-devel-standard-5.0.18-0.rhel3.i386.rpm
rpm -i MySQL-shared-standard-5.0.18-0.rhel3.i386.rpm
# Install Flex
yum -y install flex
# Install libdv
yum -y install libdv
# Install PHP
cd ${SRCDIR}
tar xzf ${SRCDIR}/php-5.2.0.tar.gz
cd ${SRCDIR}/php-5.2.0
./configure '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-mcrypt=/usr' '--with-config-file-path=/etc' '--with-bz2' '--with-curl' '--with-curl-ssl' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-gd' '--with-ttf' '--with-gdbm' '--with-gettext' '--with-ncurses' '--with-gmp' '--with-iconv' '--with-jpeg' '--with-openssl' '--with-png' '--with-regex=system' '--with-xsl=/usr' '--with-expat-dir=/usr' '--with-zlib' '--with-layout=GNU' '--with-kerberos=/usr/kerberos' '--with-apxs2=/usr/local/apache2/bin/apxs' '--without-oci8' '--enable-inline-optimization' '--enable-gd-native-ttf' '--enable-exif' '--enable-ftp' '--enable-sockets' '--enable-trans-sid' '--enable-memory-limit' '--disable-rpath' '--disable-debug' '--with-mysql=/usr/local/mysql' '--with-mysqli=/usr/bin/mysql_config'
make
make install
cp php.ini-dist /etc/php.ini
rm -Rf ${SRCDIR}/php-5.2.0
# Install Memcache PHP Extension
cd ${SRCDIR}
tar xzf ${SRCDIR}/memcache-2.1.0.tgz
cd ${SRCDIR}/memcache-2.1.0
phpize
./configure
make
mkdir /usr/local/phpextensions/
cp ${SRCDIR}/memcache-2.1.0/modules/memcache.so /usr/local/phpextensions/
rm -Rf ${SRCDIR}/memcache-2.1.0
# Install APC (php cache)
cd ${SRCDIR}
tar xzf ${SRCDIR}/APC-3.0.12p2.tgz
cd ${SRCDIR}/APC-3.0.12p2
phpize
./configure --enable-apc-mmap=yes --with-apxs2=/usr/local/apache2/bin/apxs
make
cd ${SRCDIR}/APC-3.0.12p2/modules/
cp apc.so /usr/local/phpextensions
# Install libevent-devel
yum -y install libevent-devel
# Install Memcache Daemon
cd ${SRCDIR}
tar xzf ${SRCDIR}/memcached-1.2.1.tar.gz
cd ${SRCDIR}/memcached-1.2.1
./configure
make
make install
rm -Rf ${SRCDIR}/memcached-1.2.1
# Install BRUTIS
cd ${SRCDIR}
cp monitor.php /home/support/
chmod 700 /home/support/monitor.php
cp ${SRCDIR}/crontab_root /var/spool/cron/root
# Install perl-Net-SSLeay
yum -y install perl-Net-SSLeay
# Install Webmin
cd ${SRCDIR}
rpm -U webmin-1.310-1.noarch.rpm
echo
echo
echo ---------- INSTALL COMPLETE! ----------
echo
echo
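Either build script can be run directly once all of the source tarballs are present in ${SRCDIR}. A minimal, hypothetical invocation (assuming the script is saved as install_stack.sh; the filename and log path are just examples) that also keeps a log for troubleshooting:
chmod +x install_stack.sh
./install_stack.sh 2>&1 | tee /root/install_stack.log
Because the scripts use set -e, they abort at the first failed command, so the end of the log shows exactly where a build broke.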
How to reset the MySQL admin password in Windows Plesk
Once you have logged into the server via Remote Desktop, take the following steps:
1. Go to 'Start >> Run' and type in 'services.msc'.
2. In the Services window, look for 'MySql Server'.
3. Right click on 'MySql Server' and go to 'Properties'.
4. Copy the location of the 'my.ini' file from the service's properties (see the example below), then go to 'Start >> Run' and enter that location to open the file.
C:\Program Files\SWsoft\Plesk\Databases\MySQL\Data\my.ini
5. Under the '[mysqld]' section in the 'my.ini' file you will need to add the following line:
skip-grant-tables
6. Restart 'MySql Server'.
7. Then you will need to login to MySql:
cd C:\Program Files\SWsoft\Plesk\MySQL\bin
C:\Program Files\SWsoft\Plesk\MySQL\bin>mysql -u admin
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7 to server version: 4.1.18-nt
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use mysql;
mysql> UPDATE mysql.user SET Password=PASSWORD('newpwd') WHERE User='admin';
mysql> FLUSH PRIVILEGES;
mysql> quit;
8. Remove 'skip-grant-tables' from the my.ini file, save it, and restart 'MySql Server'.
Method 2
---------------
http://kb.parallels.com/en/3661
You first need to make sure you have MySQL Admin installed as part of the MySQL build.
The below article is taken from the MySQL site:
1.) Log on to your system as Administrator.
2.) Stop the MySQL server if it is running. For a server that is running as a Windows service, go to the Services manager:
Start Menu -> Control Panel -> Administrative Tools -> Services
3.) Find the MySQL service in the list, and stop it. Open a console window to get to the DOS command prompt:
Start Menu -> Run -> cmd
4.) We are assuming that you installed MySQL to 'C:\mysql'. If you installed MySQL to another location, adjust the following commands accordingly. At the DOS command prompt, execute this command:
C:\> C:\mysql\bin\mysqld-nt --skip-grant-tables
This starts the server in a special mode that does not check the grant tables to control access.
5.) Keeping the first console window open, open a second console window and execute the following commands (type each on a single line):
C:\> C:\mysql\bin\mysqladmin -u root flush-privileges password "newpwd"
C:\> C:\mysql\bin\mysqladmin -u root -p shutdown
6.) Replace "newpwd" with the actual root password that you want to use. The second command will prompt you to enter the new password for access. Enter the password that you assigned in the first command.
7.) Stop the MySQL server, then restart it in normal mode again. If you run the server as a service, start it from the Windows Services window. If you start the server manually, use whatever command you normally use.
8.) You should now be able to connect using the new password.
These details are also found at the following link:
http://dev.mysql.com/doc/mysql/en/resetting-permissions.html
SPF Rule Check
Go through the link below if you run into email spoofing or related issues. You can use it to verify the validity of the SPF rule that you have created.
http://www.kitterman.com/spf/validate.html
You can use the link below to build an SPF record that suits your needs.
http://www.openspf.org/
The most commonly accepted qualifier is '~' (softfail) rather than '-' (fail). '-all' asks receiving servers to reject mail from hosts not listed in the record, while '~all' is a softfail, which most receivers treat more like a neutral result: the mail is neither outright accepted nor rejected, but is usually flagged.
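For example, a hypothetical record for example.com that authorises its A and MX hosts plus one extra IP and softfails everything else would look like:
example.com. IN TXT "v=spf1 a mx ip4:192.0.2.10 ~all"
Changing ~all to -all tells receiving servers to reject mail from any host not listed in the record.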
Also, if you notice spamming on the server, grep for the home directory path in the Exim main log (or whichever logs apply):
tail -f /var/log/exim_mainlog | grep "cwd=/home"
If the mail was generated from a home directory, this will point you towards the spammer's directory.
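As a rough sketch (the exact log format can vary between Exim builds), the following tallies which home directories appear most often in the log:
grep -o 'cwd=/home[^ ]*' /var/log/exim_mainlog | sort | uniq -c | sort -rn | head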
Also make sure that mail sent by PHP as the 'nobody' user is disabled, so outgoing mail can always be traced to a real account. This can be configured from the server back end:
http://www.webhostgear.com/232.html
Use the following two scripts to track down the spammer.
1. exim -bpr | grep "<*@*>" | awk '{print $4}'|grep -v "<>" | sort | uniq -c | sort -n
This shows, with exact counts, which sender or recipient addresses account for the most messages currently in the mail queue.
2. exim -bpr | grep "<*@*>" | awk '{print $4}'|grep -v "<>" |awk -F "@" '{ print $2}' | sort | uniq -c | sort -n
This shows, with counts, which domains (as sender or recipient) account for the most messages currently in the mail queue.
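If the exiqsumm utility shipped with Exim is available on the server, a per-domain summary of the queue can also be produced with:
exim -bp | exiqsumm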
Libsafe Installation
1)Download it from http://fresh.t-systems-sfr.com/linux/misc/libsafe-2.0-16.tgz
2)tar xvfz libsafe-2.0-16.tgz
3)cd libsafe-2.0-16
4)make
5)make install
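Libsafe works by being preloaded into processes, so after installation it has to be activated. Depending on the build, this is usually done via /etc/ld.so.preload or the LD_PRELOAD environment variable; assuming the library ended up as /lib/libsafe.so.2 (adjust the path to whatever 'make install' actually produced), a quick check and, if it is not already listed, activation would be:
grep libsafe /etc/ld.so.preload
echo '/lib/libsafe.so.2' >> /etc/ld.so.preload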
OVERVIEW
------------
One of the more common types of attacks directed at Linux systems comes in the form of buffer overflows, which have accounted for more than 50 percent of CERT advisories. The attacks can be quite difficult to guard against since they usually involve software flaws. The vulnerabilities reside within programs themselves and are caused when a section of memory is overwritten.
Preventing these attacks has historically involved the modification of the source code and recompilation. However, Libsafe offers another way to deal with these dangerous flaws. Libsafe is a dynamically loadable library that intercepts calls to unsafe functions and processes them so that hackers can't hijack the process and run the code of their choice. The most valuable aspect of Libsafe is that it can help you guard your Linux systems against buffer overflow vulnerabilities that have yet to be discovered.
Understanding buffer overflows
The main purpose of an attack using buffer overflow techniques is to gain access to privileged user space on a target machine. Buffer overflows can also crash a program or even cause system instability due to vulnerabilities in the software itself.
In a buffer overflow, a section of memory corresponding to a variable used by a program is overwritten. Buffer overflows have been found in all sorts of system programs and daemons such as syslogd, Sendmail, Apache, WU-FTPD, and BIND, to name but a few. Since information security has become more and more of a concern in IT, methods for avoiding, diagnosing, and documenting these exploits have improved. Yet, despite security audits and careful programming, some bugs remain present in many software programs and are not discovered until later.
Let's consider an example of a buffer overflow. When a user connects to an FTP server, the daemon displays a prompt requesting a username. We'll assume that the program is expecting a string of no more than 256 characters and is not programmed to perform an argument check. The program will work fine—that is, until a malicious individual decides to test its weaknesses by passing it a 257-character string. Now, the allocated memory space doesn’t have enough room, and the next portion up gets written over. This may not sound bad at first, but its effects can be surprising. It all depends on what is in the memory space that has been overwritten, how much of it was overwritten, and what it was overwritten with. This can be impossible to determine beforehand and can cause strange things to happen.
Buffer overflows can also be used in what are called “stack-smashing” attacks, where someone can execute his or her own code on a target system. When a program is executed, it uses an area of memory called the stack. The stack stores function arguments and local variables, among other things. If a particular variable that resides in the stack is susceptible to a buffer overflow, a hacker can use this information to gain access to the system.
Similar to buffer overflows are “format string” exploits. These also attempt to access out-of-bounds memory space to gain access to a Linux system. One way to do this is to pass special formatting characters to a print command that doesn't do any format checking. In this manner, the special characters can actually reference memory space and cause program instability. Once again, this falls into the programmers' hands and can be avoided with good coding practices. The C call sprintf(dest, "%s", src); is an example of good practice, while sprintf(dest, src); would be considered unsafe, as the user-supplied string is used directly as the format.
It's important to remember that a hacker needs to compromise a program that is run as root to get a root shell (which is almost always the goal of the hacker). In recent years, there has been a migration from running programs as root to using a separate user account. Many programs will create a user specifically for running program tasks and will avoid using root as much as possible. In this fashion, even if a program is compromised, the user will not have total control of the system.
A closer look at Libsafe
Libsafe is a system library that intercepts calls to specific unsafe functions and handles them securely. This allows it to handle precompiled executables, meaning that manually editing the source and recompiling (or waiting for the maintainer to do this) is not necessary. Also, and possibly more important, it will work on bugs in software programs that have not been discovered yet. It can do this because it intercepts all calls to a particular function, performs the task, and sends back the information without the calling program's knowledge.
Even if a program has been written using bad techniques, Libsafe will stop it from possibly being exploited. It will do this systemwide and will be transparent to the programs themselves. The main idea is to set an upper limit on the size of the buffer that is used in a particular function. Although this can't be done at compilation time, it can be done when the function is actually called. Libsafe checks the current stack and sets a realistic limit so that the buffer can't be overwritten.
Libsafe currently handles these unsafe functions:
strcpy(char *dest, const char *src)
stpcpy(char *dest, const char *src)
wcscpy(wchar_t *dest, const wchar_t *src)
wcpcpy(wchar_t *dest, const wchar_t *src)
strcat(char *dest, const char *src)
wcscat(wchar_t *dest, const wchar_t *src)
getwd(char *buf)
gets(char *s)
scanf(const char *format, ...)
realpath(char *path, char resolved_path[])
sprintf(char *str, const char *format, ...)
These are the more common ones that are problematic in C and C++ programs in a Linux environment. Most programmers should already be aware of issues coming from buffer overflows. But mistakes can and will be made, which is understandable, especially with larger programs containing a thousand or more lines of code. Libsafe provides an excellent way to safeguard against unsafe programming practices and does so with very little process latency.
Install Chkrootkit on a server
To install chkrootkit on a server
SSH as admin to your server.
#Change to root
su -
#Type the following
wget ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
# Check the MD5 SUM of the download for security:
ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.md5
md5sum chkrootkit.tar.gz
#Unpack the tarball using the command
tar xvzf chkrootkit.tar.gz
#Change to the directory it created
cd chkrootkit*
#Compile by typing
make sense
#To use chkrootkit, just type the command
./chkrootkit
#Everything it outputs should be 'not found' or 'not infected'...
Important Note: If you see 'Checking `bindshell'... INFECTED (PORTS: 465)' read on.
I'm running PortSentry/klaxon. What's wrong with the bindshell test?
If you're running PortSentry/klaxon or another program that binds itself to unused ports, chkrootkit will probably give you a false positive on the bindshell test
(ports 114/tcp, 465/tcp, 511/tcp, 1008/tcp, 1524/tcp, 1999/tcp, 3879/tcp, 4369/tcp, 5665/tcp,
10008/tcp, 12321/tcp, 23132/tcp, 27374/tcp, 29364/tcp, 31336/tcp, 31337/tcp, 45454/tcp, 47017/tcp, 47889/tcp, 60001/tcp).
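Before treating such a hit as an infection, you can check which process actually owns the flagged port (465 here is just the example from above):
netstat -tlnp | grep ':465'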
#Now,
cd ..
#Then remove the .gz file
rm chkrootkit.tar.gz
Daily Automated System Scan that emails you a report
While in SSH run the following:
vi /etc/cron.daily/chkrootkit.sh
Insert the following to the new file:
#!/bin/bash
cd /yourinstallpath/chkrootkit-0.42b/
./chkrootkit | mail -s "Daily chkrootkit from Servername" admin@youremail.com
Important:
1. Replace 'yourinstallpath' with the actual path to where you unpacked Chkrootkit.
2. Change 'Servername' to the name of the server you're running it on, so you know where the report is coming from.
3. Change 'admin@youremail.com' to your actual email address where the script will mail you.
Now save the file.
Change the file permissions so it can be executed:
chmod 755 /etc/cron.daily/chkrootkit.sh
Now if you like you can run a test report manually in SSH to see how it looks.
cd /etc/cron.daily/
./chkrootkit.sh
You'll now receive a nice email with the report! This will now happen every day, so you don't have to run it manually.
ClamAv Installation
Clam AntiVirus is an open source (GPL) anti-virus toolkit for UNIX, designed especially for e-mail scanning on mail gateways. It provides a number of utilities, including a flexible and scalable multi-threaded daemon, a command line scanner, and an advanced tool for automatic database updates. The core of the package is an anti-virus engine available in the form of a shared library.
Steps
-----
groupadd clamav
useradd -c "CLAMAV Owner" -m -d /var/lib/clamav -g clamav -u 40 -s /bin/bash clamav
cd /var/lib/clamav
mkdir {bin,db,log,run,template,tmp}
chown -R clamav:clamav /var/lib/clamav
chmod 700 /var/lib/clamav
Download the latest version from
http://www.clamav.net/download/sources
wget http://freshmeat.net/redir/clamav/29355/url_tgz/clamav-0.92.tar.gz
tar -xvzf clamav-0.92.tar.gz
cd clamav-0.92
./configure --prefix=/usr \
--sysconfdir=/etc \
--libexecdir=/usr/sbin \
--disable-clamuko \
--with-user=clamav \
--with-group=clamav \
--with-dbdir=/var/lib/clamav/db
make
make install
Configuration file
/etc/clamd.conf
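Once installed, you can update the virus database and run a scan from the command line; for example (the target directory here is just an example):
freshclam
clamscan -r -i /home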
Install LogWatch in a Server
LogWatch
----------
From the LogWatch website: "Logwatch is a customizable log analysis system. Logwatch parses through your system's logs for a given period of time and creates a report analyzing areas that you specify, in as much detail as you require. Logwatch is easy to use and will work right out of the package on most systems."
Steps
------
1)wget ftp://fr.rpmfind.net/linux/fedora/core/3/x86_64/os/Fedora/RPMS/logwatch-5.2.2-1.noarch.rpm
2)rpm -Uvh logwatch-5.2.2-1.noarch.rpm
CONFIGURATION
--------------------
# Login as root and open the configuration file.
vi /etc/log.d/conf/logwatch.conf
OR
vi /usr/share/logwatch/default.conf/logwatch.conf
# Scroll down within the file and find the part called "MailTo". This is where you can specify where you want the logs mailed to. By default it is set to root. We suggest setting this to an email address you check regularly. Also, you may want to send it to an email address that is not hosted on the server (just in case ....).
--------------------------------------------------------------------------------
MailTo = logwatch@yourdomain.com, logwatch@off-site-domain.com
--------------------------------------------------------------------------------
# Now set the amount of detail you want reported by Logwatch
You will see something similar to this:
-------------------------------------------------------------------------------
# The default detail level for the report.
# This can either be Low, Med, High or a number.
# Low = 0
# Med = 5
# High = 10
Detail = Low
--------------------------------------------------------------------------------
We suggest setting the detail to High, as it will send you more information. You can then review the output and decide whether it is too much information or meets your needs. Take some time to understand the logs, and take some time every day to monitor them.
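You can also run Logwatch manually to preview a report instead of waiting for the daily cron run. Exact option names vary a little between Logwatch versions, but an invocation along these lines generally works:
logwatch --detail High --range yesterday --print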
Installing AIDE (Advanced Intrusion Detection Environment)
AIDE (Advanced Intrusion Detection Environment)
http://www.cs.tut.fi/~rammer/aide.html
Manual
http://www.cs.tut.fi/%7Erammer/aide/manual.html
----------------------------------------------------------------
http://linsec.ca/filesystems/aide.php
=======
Compiling and Installing AIDE
Compiling AIDE is extremely straightforward. You will need the GNU versions of flex, bison, and make, and a C compiler like gcc. You may also want to build and install mhash prior to compiling AIDE to take advantage of a few more algorithms such as the haval checksum, gost checksum, and crc32 checksum. You can download mhash from mhash.sourceforge.net; some systems come with mhash or have packages readily available (via contribs, ports, etc.).
If you don't have a binary package available for AIDE, download it and compile it.
$ tar xvzf aide-0.9.tar.gz
$ cd aide-0.9
$ ./configure
$ make
# make install
You can configure some of AIDE's behaviour by passing options to the configure call. For instance, you can define a number of syslog options to use (ie. defining the facility, ident, and priority). You can also indicate whether or not AIDE should build with zlib support (which you will want if you wish AIDE to compress its database file). Use ./configure --help to see the available configuration options. You may wish to use something like:
$ ./configure --with-zlib --with-mhash --enable-mhash --with-config-file=/etc/aide.conf
This will enable zlib and mhash support, and will set the default configuration file to /etc/aide.conf. By default, AIDE installs into /usr/local.
Once you have AIDE compiled and installed, it's time to configure it.
Configuring AIDE
Before running AIDE for the first time, you will need to configure it. You should also configure it to keep an eye on the data that is important to you. System files and devices are a no-brainer, but if you have gigabytes of data in your /home directory, is it really necessary to perform such an intense integrity check on it? Depending on the data there, sure it might be. You may also want to really fine-tune AIDE to skip the bulk of the data there and simply keep an eye on the important things. Determining this before you commit to an initial database and bring the system up live would be a good thing.
Depending on how you configured AIDE, your configuration file might be /etc/aide.conf or something else. We'll assume your configuration file is /etc/aide.conf and that your database directory (where the database will be stored) is /usr/local/aide, and that the aide program itself is installed in /usr/local/bin.
Before proceeding, you may want to make a copy of your aide binary as well and copy it to a read-only medium. Having a pristine binary along with the pristine database just makes sense.
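For example (using a hypothetical staging directory /media/aide-ro that is later burned to CD or otherwise mounted read-only):
cp /usr/local/bin/aide /media/aide-ro/
md5sum /usr/local/bin/aide > /media/aide-ro/aide.bin.md5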
The first step is to modify your configuration file to reflect your system and your needs. Use your favourite editor to modify /etc/aide.conf. The following is an example aide.conf file:
aide.conf
@@define BINDIR /usr/local/bin
@@define CONFDIR /etc
@@define DBDIR /usr/local/aide
@@define LOGDIR /var/log
# the database
database=file:@@{DBDIR}/aide.db.gz
database_out=file:@@{DBDIR}/aide.db.new.gz
gzip_dbout=yes
# reporting options
report_url=stdout
report_url=file:@@{LOGDIR}/aide.log
verbose=20
warn_dead_symlinks=yes
# rule definitions
# the default config file lists the pre-defined rule definitions; these are on top of those
GLOG=>
DEV=p+n+u+g
CONF=R+sha1
BIN=R+sha1
LOG=p+n+u+g
All=R+a+sha1+rmd160+tiger+crc32
# main configuration
/bin BIN
/lib BIN
/sbin BIN
/usr/bin BIN
/usr/lib BIN
/usr/sbin BIN
/usr/local/bin BIN
/usr/local/lib BIN
/usr/local/sbin BIN
!/usr/src
!/usr/local/src
/boot BIN
/boot/System.map$ BIN-m-c
/dev DEV
/etc CONF
/etc/mtab$ LOG
/var/log GLOG
@@{CONFDIR}/aide.conf$ All
@@{BINDIR}/aide$ All
@@{DBDIR}/aide.db.gz$ All
This makes for a good start. You will want to fine-tune this according to your own needs, of course, but this covers the basics.
We first define a few macros, which we can then use later in the configuration file. Then we define some database specifics, such as the location of the existing database and the database to create. We also indicate we want the database files compressed with gzip (which is only available if you build AIDE with zlib).
Next we define some reporting options. We want the report printed to standard output and written to the log file /var/log/aide.log. We want to increase the verbosity to 20, which is a good number (the default is 5). Finally, we want to be warned about symlinks that point to non-existent files.
The next step is to define some rules. AIDE comes with a few pre-defined rules, and a rule is made up of different checks. The available checks for AIDE are:
Default Groups
Group Check
p permissions
i inode
n number of links
u user
g group
s size
b block count
m mtime
a atime
c ctime
S check for growing size
md5 md5 checksum
sha1 sha1 checksum
rmd160 rmd160 checksum
tiger tiger checksum
haval haval checksum (only if mhash enabled)
gost gost checksum (only if mhash enabled)
crc32 crc32 checksum (only if mhash enabled)
R p+i+n+u+g+s+m+c+md5
L p+i+n+u+g
E empty group
> growing logfile (p+u+g+i+n+S)
You can easily use the default groups; however, you can also define your own groups if you want to add a few more checks to certain things, or simply to make the configuration easier to understand, as we have done above. The following new groups have been defined:
GLOG=>
DEV=p+n+u+g
CONF=R+sha1
BIN=R+sha1
LOG=p+n+u+g
All=R+a+sha1+rmd160+tiger+crc32
We've created GLOG as our growing logfile group. The DEV group is suitable for devices, checking permissions, number of links, user, and group entries. The LOG group is similar. The CONF group is defined to handle configuration files, checking everything in the R group and adding the sha1 checksum. The BIN group is similar, which will be used for checking things like binaries and libraries. Finally, we define All which applies every check; great for very sensitive information.
The Checks
Finally, the checks are defined. There are a few rules to remember with this list, which allows for a lot of flexibility.
* Directories or files prefixed with ! are ignored (ie. in the above, we completely ignore /usr/src and /usr/local/src)
* Directories prefixed with = are added alone; none of their children are added. However, AIDE will first do a depth-first search
* Directories and files are always treated as a regular expression; ie. /usr/bin is identical to /usr/bin.*
* Suffixing a directory or file with $ restricts the check to that directory or file alone
This may sound a little confusing, but it becomes easier to understand. Each check has three parts: The directive (! or =), a regular expression for the directory/file to check, and the rule to apply.
For instance:
@@{CONFDIR}/aide.conf$ All
defines the aide.conf file (the regular expression @@{CONFDIR}/aide.conf$) and the rule to apply (All). There is no directive here.
!/usr/src
The directive ! is used here to tell it not to match the regular expression /usr/src (which is, remember, really /usr/src.*).
There are a few things to remember. If you wish to check a directory alone, plus one sub-directory with its contents, you may be tempted to use something like:
=/foo R
/foo/bar R
However, this is the incorrect way to specify it. Here everything under /foo will be added because your rule is =/foo.* R, so the second rule is redundant. Instead of just /foo and the contents of /foo/bar, you're getting all of /foo. To write this rule properly, you would use:
=/foo$ R
/foo/bar R
When specifying a single file to check, always suffix it with $ (ie. /path/to/file$). This will prevent someone from creating a file /path/to/file_hack and avoiding detection (remember the regular expression .* is at the end of everything and applies to files and directories).
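For example, to watch just a single configuration file with the CONF rule defined above (the path here is only an illustration):
/etc/hosts$ CONF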
A moderate understanding of regular expressions will really help you develop a configuration that suits your system and needs.
Initializing the Database
Once you have your configuration file configured the way you want (or the way you think you want), it's time to execute AIDE for the first time to initialize the database. Remember, your system should still not be connected to a network to make sure the database you are creating is pristine and not tampered with. To create the database, execute:
# aide --init
Depending on your configuration and the size of your filesystem, this can take a while. When the initialization is complete, move the database immediately to read-only media to ensure it remains pristine. At this point, AIDE does not encrypt the database at all; it is a plain ASCII file, so anyone who gains access to it can read or modify it.
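For instance, with the paths from the example aide.conf above, the freshly built database needs to be moved into place and copied somewhere safe (the backup destination below is only an example):
# mv /usr/local/aide/aide.db.new.gz /usr/local/aide/aide.db.gz
# cp /usr/local/aide/aide.db.gz /media/cdrw/     # or burn it to CD-R, copy it to a write-protected USB stick, etc.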
The next step is to run a check against the database, just to ensure that your configuration is written the way you want it to be. This is also a great opportunity to modify and fine-tune your configuration to cover exactly what you want, and to omit what you don't want included in the database.
To run the check, execute:
# aide --check
This will check the current state of the filesystem against the database you initially created. If there are any changes, they will show up in the report; examine the output carefully, as it will indicate whether there have been any changes to the system. Also be aware of changes you have made yourself. If you have installed new software or upgraded packages, you will see the results of those changes in the report, and if you have added or removed users, or made any other system changes (i.e. changes to configuration files, etc.), those changes will be reflected as well.
Updating the Database
Once you have your system live, with the initial database stored on CD-ROM or some other removable, read-only media, you may wish to update the database. In fact, you likely will have to as you apply security updates, install new software, change configuration files, and so on. You do not need to create a new database with --init each time; once you've made changes to your system, run an update. The new database will be written to the file specified by the database_out keyword in your aide.conf file. Place this new database onto read-only media immediately and replace your old database with it.
To update the database, execute:
# aide --update
If you wish to use a different configuration file (with different rules, perhaps different output files, etc.) you can specify a non-default configuration file with the --config=[file] option to aide. You can also change the verbosity level per-run (overriding that specified in the configuration file) by using --verbose=255 (or whatever verbosity level you wish to set). This is useful if you're debugging your configuration file.
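For example (the alternate configuration path here is only an illustration):
# aide --check --config=/etc/aide-test.conf --verbose=255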
Overall, AIDE is a very useful system to have in place. It offers a very reliable means to check system state, and while it isn't an active alert system that warns you of a compromise, like a network intrusion detection system such as Snort, it will let you know passively if something has happened that warrants investigation. It may be too late to prevent the attack, but if nothing else, you'll know that the attack has occurred and will be able to do something about it.
Final Notes
A few things to be aware of. It looks as though AIDE 0.9, unlike 0.8, requires a newer vsnprintf in glibc, so you will need a more recent glibc in order to build it for Linux (I'm not sure exactly what version of glibc is required, although I am running it on Mandrakelinux 9.0, which comes with glibc 2.2.5).
Some operating systems, like Solaris 2.6 apparently, and Mac OS X 10.2, do not have this newer vsnprintf, and configure will fail on the vsnprintf check. You can bypass this check by commenting out line 2524 in the configure file (the line reads exit 1). Unfortunately, there seem to be mixed results with this judging from the mailing list archive; in some cases AIDE will compile and dump core as soon as you run it, in others it fails to compile at all, and in others it looks like it might work. What most people seem to have done is stick with AIDE 0.8 on these systems.
Another problem I encountered was when attempting to build AIDE on OS X, aside from the vsnprintf issues. Compiling mhash from source works just fine, but for some reason the configure script for AIDE refuses to see the library, whether it's installed in /usr/local/lib or in /sw/lib (where the rest of the good stuff from fink resides). It's been suggested that, in these cases, calling configure like:
$ ./configure --with-extra-libs=-L/usr/local/lib \
--with-extra-includes=-I/usr/local/include
should work, but the mhash libraries still are not picked up during the test. I'm unsure if this is a quirk in the configure script or an issue with OS X's environment.
In short, there might be some issues getting AIDE to work for you, depending upon what operating system you are using. You may find that AIDE works flawlessly, or you may find yourself unable to even compile it. Depending on your results, you may wish to take a look at Tripwire instead. However, please do keep in mind that AIDE is still pre-release software, so if you do come across any problems, feel free to bring them up with the developers on the mailing list.
Resources
http://www.cs.tut.fi/~rammer/aide.html
Manual: http://www.cs.tut.fi/%7Erammer/aide/manual.html
http://linsec.ca/filesystems/aide.php
Removal of insecure packages and unnecessary software from server.
Please check for packages that are not needed on a web server. You can use the command rpm -qa to list all installed RPM packages on the server, then pick out and remove the ones that are not required (a sample check-and-removal sequence follows the list below).
Some common examples of unnecessary packages are given below.
mtools
yp-tools
redhat-config-nfs
redhat-config-samba
tftp-server
ypserv
redhat-config-printer-gui
samba
samba-swat
cups
gmp-devel
ElectricFence
doxygen
Xfree86-xfs
redhat-config-printer
cups-libs
samba-common
samba-client
These package names are specific to RHEL 3; the list varies between distributions.
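As a rough illustration (the package names are just examples from the list above; always verify nothing you need depends on them before erasing):
# rpm -qa | egrep 'samba|cups|ypserv|tftp-server'   # see which of the unwanted packages are installed
# rpm -e --test samba samba-swat tftp-server        # dry run: reports dependency problems without removing anything
# rpm -e samba samba-swat tftp-server               # remove them once the test is clean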
Server Security steps and the quote details
Do the following for server security on cPanel servers:
1. sysctl http://www.eth0.us/sysctl
2. noexec, nosuid /var/tmp /tmp http://www.eth0.us/tmp
3. LES Linux Environment Security http://www.securecentos.com/temp/installing-les-linux-environment-security.html
4. Removal of Insecure packages http://pcgeo.blogspot.com/2009/12/removal-of-insecure-packages-and.html
5. RPM upgrades [Yum Update]
6. Firewall (CSF + LFD) http://www.configserver.com/free/csf/install.txt
7. AIDE (Advanced Intrusion Detection Environment) http://pcgeo.blogspot.com/2009/12/installing-aideadvanced-intrusion.html
8. Logwatch Installation and configuration http://pcgeo.blogspot.com/2009/12/install-logwatch-in-server.html
9. ClamAV (virus scanner) Installation with Exim on cPanel servers http://pcgeo.blogspot.com/2009/12/clamav-installation.html
10. chkrootkit http://pcgeo.blogspot.com/2009/12/install-chkrootkit-on-server.html
11. LibSafe Installation http://pcgeo.blogspot.com/2009/12/libsafe-installation.html
more steps
* WHM -> ConfigServer Security&Firewall -> Check Server Security (you need to get at least 105 points out of 119)
* WHM -> Update Config -> select Manual Updates Only (STABLE tree) and run "/scripts/upcp --force"
* Run EasyApache.
You can enable the modules below:
Mod SuPHP
IonCube Loader for PHP
Zend Optimizer for PHP
Bcmath, Bz2, CGI, Calendar, Curl, CurlSSL, Curlwrappers, FTP, GD, Iconv, Imap, MM, Magic Quotes, MailHeaders, Mbregex, Mbstring, Mcrypt, Mhash, Mime Magic, Mysql, Mysql of the system, Openssl, POSIX, Path Info Check, Pear, SafeMode, Sockets, TTF (FreeType), XmlRPC, Zip, Zlib
LAST STEP
** Deny direct root SSH access
** Change the SSH port (don't forget to add the new port to CSF)
** Create a wheel user (a minimal sketch of these changes follows below)
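A minimal sketch of the matching changes; the port number and username are only examples, and you should keep an existing session open until you have confirmed you can still log in:
# useradd admin ; passwd admin           # the wheel user you will log in as
# usermod -aG wheel admin                # allow it to su to root (enable pam_wheel in /etc/pam.d/su if required)
# vi /etc/ssh/sshd_config                # set, for example:  Port 2277  and  PermitRootLogin no
# vi /etc/csf/csf.conf                   # add the new port (2277) to TCP_IN, then reload with: csf -r
# /etc/init.d/sshd restart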
SSTP Configuration step by step
http://www.netcal.com/blog/?p=135
http://www.windowsecurity.com/articles/Configuring-Windows-Server-2008-Remote-Access-SSL-VPN-Server-Part2.html
HTTP 500 error message displays instead of ASP error message / ASP debugging mode
To enable ASP debugging
1. In IIS Manager, double-click the local computer, right-click the Web Sites folder or an individual Web site folder, and then click Properties. Configuration settings made at the Web Sites level are inherited by all of the Web sites on the server. You can override inheritance by configuring the individual site or site element.
2. Click the Home Directory tab, and then click Configuration.
3. Click the Debugging tab, and then select the Enable ASP server-side script debugging check box.
4. Click Send detailed ASP error messages to client if you want to send the client very detailed debugging information, or click Send the following text error message to client and type the text you want to send to the client.
5. Click OK.
==========================================
You can also customize the 500 internal error created by the IIS using 500-100.asp file.
Note: The 500-100.asp file should not be implemented on production Web sites, as it may expose custom code to users.
To use the 500-100.asp file for error handling on a nondefault Web site (the website for which you wish to set up this functionality), perform the following steps:
1. Start the Internet Service Manager (ISM), which loads the Internet Information Services snap-in for the Microsoft Management Console (MMC).
2. Right-click the appropriate Web site, click New, and then click Virtual Directory.
3. In the Virtual Directory Creation Wizard, click Next. In the Alias text box, type IISHelp, and then click Next.
4. When you are prompted for the path to the content directory, click Browse, select the C:\Windows\Help\IisHelp folder, and then click Next.
5. On the Access Permissions page, accept all the defaults, click Next, and then click Finish. Give Read permission only.
6. Right-click the Web site again, and then click Properties.
7. On the Custom Errors tab, select the 500;100 error line, and then click Edit Properties.
8. In the Message Type list box, select URL, and then type /iisHelp/common/500-100.asp in the URL text box.
9. Click OK twice to return to the ISM.
Windows Security Check/Auditing
To secure the Windows servers, please take the following actions on our servers.
1) Go through the server's event viewer logs to check for any hack incidents. From the event viewer you can obtain information about hack attempts from various IPs. If any such incidents are noted, you can block access for those IPs by writing the required firewall rule in
Start >> Programs >> Administrative Tools >> Local Security Policy >> IP Security Policies on Local Computer .
You can ban or accept an IP/Host by writing the required rule.
2) Install antirootkit software on our Windows servers. Some options:
a) RootkitRevealer - provided by Microsoft (Sysinternals). It has to be installed by the DC staff themselves, since the installation can only be done from the physical console of the server; it cannot be done via terminal services.
b) Malicious Software Removal Tool - provided by Microsoft and available with all Microsoft OSes. To use it, type the command mrt in Run:
Start >> Run >> mrt
c) Install a good antirootkit software.
Free Software :
-----------------
Sophos antirootkit
Paid Software
----------------
RootKit Buster - Trend Micro
Refer : http://www.antirootkit.com/software/index.htm
3) Install the Nessus Network Security Scanner. Use version 3, which is free, while Nessus 4 is paid software. You can download it from the following link:
http://www.nessus.org/download/nessus_download.php
Select Nessus 3.2.1.1.exe
http://downloads.nessus.org/nessus3dl.php?file=Nessus-3.2.1.1.exe&licence_accept=yes&t=00e6d5dee038bea390ddcc3f5fdf197f
After the install create a user named 'localuser'. To create the localuser
Start >> Programs >> Tenable >> Nessus >> Manage Users
Once the user is created, take the nessus client.
In nessus client add a new network by clicking the '+' button. Name the network as 'localhost'.
Then click the 'Connect' button. You will get a pop-up window; click Edit in it, add the user 'localuser' and its password, and then proceed.
Select the 'default policy' in the succeeding window, then select 'Scan now'. You will get a detailed report of any vulnerabilities that are present.
4) The next aspect of security auditing on a Windows server is to find anonymous/hack users and remove them from the registry. Be careful when you edit Windows registry keys; careless editing may damage or corrupt the Windows OS, so make sure to take a backup of the registry before touching it.
To access the windows registry.
Start >> run >> regedit
In regedit, take
My Computer >> HKEY_LOCAL_MACHINE >> SOFTWARE >> Microsoft >> Windows NT >> CurrentVersion >> ProfileList
At this location you can see the various profiles. Check for a hacker profile here; if you find one (say 'support'), remove that key from the registry. Note down the image path before making ANY CHANGE to the keys.
Before deleting the user from the registry profile, have a look at the Computer Management.
Start >> Programs >> Administrative Tools >> Local Users and Groups >> Users.
Here you should double-check whether the hacker profile exists. If it does, check the permissions assigned to it and remove administrator/full privileges, if any. The image path will also give you an idea of the directories the hacker user has access to; check the permissions assigned to those folders and remove the user from the permission list wherever it appears. Then remove the hacker user from Start >> Programs >> Administrative Tools >> Local Users and Groups >> Users.
Also make sure to remove the profile from the registry. Keep an eye on the server and keep checking the logs for any other hack attempts.
5) Install a good antivirus software on the server. Prefer paid products such as Kaspersky, Trend Micro, or Avira. If the customer insists on a free one, you can go for free antivirus software like AVG Free Edition or Panda Free Edition. It is recommended not to enable the firewall component bundled with the antivirus, since it may block access for web users.
6) Make sure to grant only the required permissions to users. Only the Administrator user should have full privileges; other users should not be given full/write/execute privileges.
7) If you find any server software in an outdated or degraded state, make sure to upgrade it to the latest version. Apply Windows updates regularly to ensure maximum security; you can obtain updates/patches from the TechNet and Microsoft sites. Make sure the latest available service pack is applied to the server.
8) Reset the server/software/account passwords regularly (once a week or month). Use only complex passwords with a mix of letters, numbers, and symbols. Do not use easy-to-remember passwords like password123.
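If you have a Linux box handy, one quick way to generate a random password for this (just a convenience sketch; any decent password generator will do):
$ openssl rand -base64 12
$ tr -dc 'A-Za-z0-9!@#%' < /dev/urandom | head -c 14 ; echo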
A few Linux tips that should be really helpful to you
Please go through the following Linux tips. I am sure they will be helpful to you guys.
1.) Flush DNS cache in Linux
If you registered a domain and can't access it, your DNS cache may be the problem. Here's how to flush the DNS cache in Linux (assuming nscd is providing the caching):
/etc/rc.d/init.d/nscd restart
2.) Find files that are older than X years using Linux
If you want to find out what files are older than - let’s say - ten years and still residing on your hard drive, do a
cd /
find . -mtime +3650
This will search for such files on all your Linux partitions and list them at the end of the search. In this case, 3650 is the number of days to go back (roughly ten years).
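A couple of variations, assuming GNU find (the /home path is only an example):
find /home -type f -mtime +3650 -ls        # restrict to regular files and show details
find /home -type f -mtime +3650 -delete    # remove them, once you are absolutely sure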
Limit the CPU usage of a certain application in Linux
You can do this by installing cpulimit. You can limit a certain running application either by name or by process ID:
cpulimit -e firefox -l 30
This won’t let Firefox go beyond a 30% CPU usage limit.
Limit the cpu usage by process ID
cpulimit -p 3493 -l 40
This will limit process number 3493 to 40% CPU consumption.
What uses your resources?
By using top you can find out what processes are using your resources and in what amount. Another way to do this is by executing the following command:
ps -eo pcpu,pid,user,args | sort -r -k1 | more
This will output something like
6.5 6077 user pidgin
6.5 5535 root /usr/X11R6/bin/X :0 -br -audit 0 -auth /var/lib/gdm/:0.Xauth -nolisten tcp vt7
6.1 7027 user transmission
5.7 6563 user /usr/lib/firefox-3.0.5/firefox
1.2 9 root [events/0]
0.7 6070 user nautilus --no-desktop --browser
0.4 29888 user gedit
0.2 6555 user /usr/lib/thunderbird/thunderbird-bin
…nicely formatted and structured.
Make the lights of your NIC blink
If you work on a large network and get sent to the server room to check out a certain network card, you might get lost in the multitude of network hardware. To identify the NIC, SSH to the machine in question and do a
sudo ethtool -p eth0
The lights of the network card should start blinking repeatedly.
Generate a Certificate Signing Request (CSR) for an SSL Certificate from RapidSSL.com
Refer http://www.rapidssl.com/ssl-certificate-support/generate-csr/apache_mod_ssl.htm
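A typical way to generate the private key and CSR with OpenSSL (the domain name is only a placeholder; follow the RapidSSL instructions at the link above for the exact details they require):
openssl req -new -newkey rsa:2048 -nodes -keyout www.example.com.key -out www.example.com.csr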
[cPanel smartcheck] Possible Hard Drive Failure Soon
Please refer to the URLs below:
https://jonesolutions.com/support/index.php?x=&mod_id=2&root=23&id=164
http://forums.cpanel.net/f5/possible-hard-drive-failure-soon-113997.html
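To see why smartcheck is complaining, you can also query the drive's SMART data directly (the device name is only an example):
# smartctl -H /dev/sda     # overall health self-assessment
# smartctl -a /dev/sda     # full attribute dump; watch Reallocated_Sector_Ct and Current_Pending_Sector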
Configuring CSF on VPS
CSF requires at least these iptables modules:
ip_tables
ipt_state
ipt_multiport
iptable_filter
ipt_limit
ipt_LOG
ipt_REJECT
ipt_conntrack
ip_conntrack
ip_conntrack_ftp
iptable_mangle
If any one of these is missing, CSF will not function with all its features enabled, even though it shows as running. So please follow this URL if you face any CSF-related issue on VPS nodes.
http://www.sherin.co.in/csf-lfd-firewall-configuration-in-vps-virtuozzo-openvz/
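A quick way to verify is CSF's own test script plus lsmod on the hardware node (paths as shipped with a standard CSF install; adjust if yours differs):
# perl /usr/local/csf/bin/csftest.pl    # reports which iptables functions are unavailable
# lsmod | egrep 'ip_tables|ipt_state|ipt_multiport|iptable_filter|ipt_LOG|ipt_REJECT|ip_conntrack'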
Some useful Links and Commands
Some useful links
* Migrate cpanel - cpanel check list :: http://forums.cpanel.net/482369-post1.html
* Easy apache errors and its solutions :: http://twiki.cpanel.net/twiki/bin/view/AllDocumentation/EaError
Some useful Commands
* to scan public_html using clamscan
# for i in /home/*/www ; do clamscan -ri -l /some/log "$i" ; done
* to clear the ARP cache, issue the command
# arping -I eth1 IP_address
* To find log entries within a specific time range
# sed -n '/Jul 14 13:/,/Jul 14 15:/p' /var/log/messages (entries between 1 PM and 3 PM on Jul 14)
* If we wish to allow an IP to access Webmin, we need to edit the following file
# /etc/webmin/miniserv.conf and add the IP to the allow= line, then restart Webmin
HSphere: unable to add a new email account
If you are unable to add a new email account from HSphere, try the following command on oceant04 and then add it from the control panel:
#cd /hsphere/shared/scripts/
[root@cp scripts]# ./mbox-del email@yourdomain.com
Console from Linux
We usually come across the error "terminal server has exceeded max number of allowed connections". We try using the /console switch along with mstsc to get rid of this error. But in some cases, this might not work. Try the following from a linux machine and you should be able to connect without any issue.
rdesktop -0 1.2.3.4
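A slightly fuller invocation (the IP and username are placeholders; on newer Windows versions the mstsc equivalent of /console is /admin):
rdesktop -0 -u administrator 1.2.3.4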
Block IP on a windows server
Here is a simple tool which you can use to block IPs or IP ranges in Windows IIS. You can also block IPs from certain predefined countries like China or Korea. Simply download, extract, and run the exe, then choose the website and the IP or range to be blocked.
http://www.hdgreetings.com/other/Block-IP-IIS/