Linux troubleshooting tools

An app is running "slow" on Linux, so how do you go about figuring out what could be wrong? Typical bottlenecks for a slow application are CPU, disk I/O, network, memory, or the database. For instance, if Apache is running slow it could be memory starved. Some of the tools we can use to debug applications are listed below. Keep in mind that when using these tools you generally want to take multiple snapshots over time, rather than basing your conclusions on a single moment.

1) vmstat shows a number of interesting statistics. One thing to look for is the 'r' column under procs, which shows the number of processes waiting for run time. If this number is consistently "high" it indicates CPU starvation, and CPU-starved apps run slow. To fix it, either reduce the load or increase the number of CPUs in the system. Another field to look at is 'cs' under the system heading, which stands for context switches. A "high" number means the CPU is switching between a lot of tasks, which may be network, disk, or CPU related. One way to address this is to check whether the server is doing "too many" things, i.e., running too many applications, and reduce the number of applications running on it.

$ vmstat -s
     16301872  total memory
      5718716  used memory
      3179672  active memory
      2228164  inactive memory
     10583156  free memory
       166264  buffer memory
      2913548  swap cache
      8191992  total swap
          572  used swap
      8191420  free swap
     26013290 non-nice user cpu ticks
          881 nice user cpu ticks
     17260265 system cpu ticks
   7326487040 idle cpu ticks
     13013070 IO-wait cpu ticks
           72 IRQ cpu ticks
         8054 softirq cpu ticks
            0 stolen cpu ticks
     11212676 pages paged in
    180145305 pages paged out
           44 pages swapped in
          145 pages swapped out
   4176361125 interrupts
   1524882692 CPU context switches
   1367818576 boot time
      9070041 forks
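
Since single-snapshot numbers can be misleading, it helps to sample vmstat at an interval. A minimal sketch (the interval and count below are arbitrary):

$ vmstat 5 5      # print stats every 5 seconds, 5 times; the first line is the average since boot
$ vmstat -d 5 5   # per-disk statistics at the same interval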

2) mpstat: Shows processor utilization on a per-core basis.

$ mpstat -P ALL
Linux 2.6.32-358.6.1.el6.x86_64 (hostname) 	08/20/2013 	_x86_64_	(8 CPU)

10:48:05 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:48:05 PM  all    0.03    0.00    0.23    0.18    0.00    0.00    0.00    0.32   99.24
10:48:05 PM    0    0.05    0.00    0.42    0.72    0.00    0.00    0.00    0.69   98.13
10:48:05 PM    1    0.05    0.00    1.01    0.34    0.00    0.00    0.00    1.78   96.82
10:48:05 PM    2    0.04    0.00    0.11    0.04    0.00    0.00    0.00    0.03   99.78
10:48:05 PM    3    0.03    0.00    0.11    0.18    0.00    0.00    0.00    0.03   99.66
10:48:05 PM    4    0.02    0.00    0.07    0.01    0.00    0.00    0.00    0.02   99.87
10:48:05 PM    5    0.02    0.00    0.08    0.12    0.00    0.00    0.00    0.02   99.77
10:48:05 PM    6    0.01    0.00    0.04    0.00    0.00    0.00    0.00    0.01   99.93
10:48:05 PM    7    0.01    0.00    0.04    0.01    0.00    0.00    0.00    0.01   99.93

3) iostat: If you need help figuring out how your disk subsystem is performing, use iostat. "The iostat command generates three types of reports, the CPU Utilization report, the Device Utilization report and the Network Filesystem report." (From the iostat man page.) Items of concern here are %iowait, which if high indicates that the CPU is spending time waiting for disk I/O to complete, and %util in the extended device report; if %util approaches 100%, the device is saturated.
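
A minimal sketch of how you might run it to watch these fields over time (device names will differ on your system):

$ iostat -c 5 3    # CPU utilization report every 5 seconds, 3 samples
$ iostat -xz 5 3   # extended per-device report (%util, await), skipping idle devices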

4) netstat: This is an extremely useful command to diagnose numerous network issues.
To view the routing table, use:

$ netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
216.218.223.240 *               255.255.255.240 U         0 0          0 br0
216.218.223.240 *               255.255.255.240 U         0 0          0 em1
link-local      *               255.255.0.0     U         0 0          0 br0
default         uplink241.gigo. 0.0.0.0         UG        0 0          0 br0

If you are having network errors and you want to see interface statistics, use netstat -i:

$ netstat -i
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
br0        1500   0 44804569      0      0      0  3708480      0      0      0 BMRU
em1        1500   0 49533635      0      0      0  7760855      0      0      0 BMRU
em1:1      1500   0      - no statistics available -                            BMRU
lo        16436   0    32131      0      0      0    32131      0      0      0 LRU
vnet0      1500   0        4      0      0      0      225      0      0      1 BMRU
vnet1      1500   0  3761563      0      0      0 45308606      0      0      0 BMRU

To view TCP/UDP statistics run:

$ netstat -s
(snip)
Tcp:
    510 active connections openings
    170202 passive connection openings
    1233 failed connection attempts
    114 connection resets received
    1 connections established
    3038753 segments received
    3310750 segments send out
    11236 segments retransmited
    0 bad segments received.
    469 resets sent
(snip)
Udp:
    304134 packets received
    37 packets to unknown port received.
    0 packet receive errors
    304186 packets sent
(snip)
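
Another common use is listing listening sockets along with the owning process (run as root to see every PID):

# netstat -tulpn   # TCP and UDP listeners, numeric addresses, with PID/program name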

5) top: An all-time favorite for most engineers. Displays a number of statistics relating to CPU, memory, and processes. Extremely useful to see which processes are taking up the most resources on the system.
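
top can also run non-interactively, which is handy for capturing snapshots over time. A quick sketch:

$ top -b -n 3 -d 5 > top.out   # batch mode: 3 iterations, 5 seconds apart, saved to a file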

6) traceroute: traceroute sends 3 UDP packets at a time, addressed to an unused high port on the destination host. The TTL is set to 1 initially; as soon as the packets reach the first hop or router, the router responds with an ICMP Time Exceeded message. traceroute then sets the TTL to 2 and sends the packets again. Hop 2 responds with TTL exceeded, and traceroute then sends 3 packets with a TTL of 3. It keeps doing this until it reaches the destination. Using traceroute we can figure out the network path to a destination. Also, if the default UDP probes are blocked by firewalls along the way, we can send TCP probes instead with the -T flag. If you see '*' in the output, that hop did not return an ICMP message.

$ traceroute www.google.com
traceroute to www.google.com (74.125.239.147), 30 hops max, 60 byte packets
 1  66.220.4.225 (66.220.4.225)  8.269 ms  8.142 ms  8.581 ms
 2  10gigabitethernet1-1.core1.pao1.he.net (184.105.213.66)  8.382 ms  8.410 ms  2.025 ms
 3  184.105.224.254 (184.105.224.254)  12.300 ms  12.341 ms  12.355 ms
 4  209.85.240.114 (209.85.240.114)  1.242 ms  1.256 ms  1.257 ms
 5  66.249.95.31 (66.249.95.31)  12.384 ms  12.442 ms  12.402 ms
 6  nuq05s02-in-f19.1e100.net (74.125.239.147)  12.339 ms  11.187 ms  11.206 ms
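
A minimal sketch of the TCP mode (port 80 is chosen here because web servers typically accept it; this requires root):

# traceroute -T -p 80 www.google.com   # send TCP SYN probes to port 80 instead of UDP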

7) ping: ping uses ICMP ECHO_REQUEST packets to measure the time it takes to reach a destination. ICMP stands for Internet Control Message Protocol. ping is very useful to see if a host is alive. You can also ping the broadcast address of a network (with the -b flag) to see how many devices on the network respond. ping is also used to measure network latency; round-trip times and packet loss statistics are computed. If you see packet loss or unusually high round-trip times, the network path to the destination is a likely suspect.

		$ ping www.google.com
		PING www.google.com (74.125.239.144) 56(84) bytes of data.
		64 bytes from nuq05s02-in-f16.1e100.net (74.125.239.144): icmp_seq=1 ttl=59 time=1.18 ms
		64 bytes from nuq05s02-in-f16.1e100.net (74.125.239.144): icmp_seq=2 ttl=59 time=1.23 ms
		64 bytes from nuq05s02-in-f16.1e100.net (74.125.239.144): icmp_seq=3 ttl=59 time=1.20 ms
		64 bytes from nuq05s02-in-f16.1e100.net (74.125.239.144): icmp_seq=4 ttl=59 time=1.19 ms
		

8) Valgrind: This is normally not installed on a CentOS box; you can install it with 'sudo yum install valgrind -y'. Valgrind is good at finding memory leaks. I often use this program when I find a system running out of memory on a recurring basis. To use Valgrind, run 'valgrind --leak-check=yes yourprogram'. Under the leak summary you will see "definitely lost" with 'X' number of bytes if there is a memory leak. You will also see source line numbers if the code was compiled with the -g option to gcc. Once you figure out that the program has memory leaks based on the Valgrind output, the next step is to fix the leak with the help of a developer. If that is not possible, as a temporary stopgap you can restart the application on a periodic basis, provided the application can tolerate restarts without interrupting end users. For instance, if you are using Jetty and the Java app has a memory leak, restarting Jetty will give you the memory back temporarily. Of course you have to ensure that there are other servers running Jetty which can handle the load while this particular server is restarted. (See http://valgrind.org for additional information.)
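
A minimal sketch of the workflow, assuming a small C program called leaky.c (the file name is just for illustration):

$ gcc -g -o leaky leaky.c              # compile with debug symbols so Valgrind can show line numbers
$ valgrind --leak-check=yes ./leaky    # look for "definitely lost" bytes in the LEAK SUMMARY
$ valgrind --leak-check=full ./leaky   # more detail, including a stack trace per leaked block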

9) oprofile: OProfile is a system-wide profiler, very useful for figuring out where an application (or the kernel) is spending its time.
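
A quick sketch of one way to use it; this assumes a reasonably recent OProfile release that ships the operf front end (older releases used opcontrol instead), and 'myapp' is just a placeholder binary name:

$ operf ./myapp        # profile a single run of myapp
$ opreport --symbols   # summarize the collected samples per symbol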

10) tcpdump: The most reliable way of snooping network traffic. In the example below I use tcpdump to filter traffic to and from www.google.com, then open a telnet session to www.google.com on port 80. As you can see from the tcpdump output, the 3-way TCP handshake is visible.

$ telnet www.google.com 80
Trying 74.125.239.116...
Connected to www.google.com.
Escape character is '^]'.
^]quit

telnet> quit
Connection closed.

# tcpdump host www.google.com
reading from file file, link-type EN10MB (Ethernet)
23:29:36.745130 IP myhost.com.54378 > nuq05s01-in-f16.1e100.net.http: Flags [S], seq 814284659, win 14600, options [mss 1460,sackOK,TS val 657765386 ecr 0,nop,wscale 7], length 0
23:29:36.746269 IP nuq05s01-in-f16.1e100.net.http > myhost.com.54378: Flags [S.], seq 217057752, ack 814284660, win 62392, options [mss 1430,sackOK,TS val 3024867195 ecr 657765386,nop,wscale 6], length 0
23:29:36.746293 IP myhost.com.54378 > nuq05s01-in-f16.1e100.net.http: Flags [.], ack 1, win 115, options [nop,nop,TS val 657765387 ecr 3024867195], length 0
23:29:39.736476 IP myhost.com.54378 > nuq05s01-in-f16.1e100.net.http: Flags [F.], seq 1, ack 1, win 115, options [nop,nop,TS val 657768378 ecr 3024867195], length 0
23:29:39.737655 IP nuq05s01-in-f16.1e100.net.http > myhost.com.54378: Flags [F.], seq 1, ack 2, win 975, options [nop,nop,TS val 3024870187 ecr 657768378], length 0
23:29:39.737673 IP myhost.com.54378 > nuq05s01-in-f16.1e100.net.http: Flags [.], ack 2, win 115, options [nop,nop,TS val 657768379 ecr 3024870187], length 0
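
If you want to capture traffic to a file for later analysis (in Wireshark, for example), a minimal sketch, assuming the interface is named eth0:

# tcpdump -i eth0 -nn -w web.pcap port 80   # capture port-80 traffic without name resolution, write to web.pcap
# tcpdump -r web.pcap                       # read the capture back later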

11) perf: perf collects and analyzes performance counters on Linux.
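
A minimal sketch of common invocations (the binary name is just a placeholder):

$ perf top                 # live view of the hottest functions system-wide
$ perf record -g ./myapp   # record a profile of myapp with call graphs
$ perf report              # browse the recorded profile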

12) tuned: RHEL ships with a number of tuned profiles that can be applied to a system. These profiles include kernel, disk, and network parameter changes suited to a particular workload. tuned lets you apply preset profiles such as virtual-host, virtual-guest, enterprise-storage, and throughput-performance.
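
A quick sketch using tuned-adm:

# tuned-adm list                             # show available profiles and the active one
# tuned-adm profile throughput-performance   # switch to the throughput-performance profile
# tuned-adm active                           # confirm which profile is active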

13) sar: sar collects and prints system activity reports, covering network, CPU, disk, and memory.
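
A minimal sketch of interval sampling with sar (intervals and counts are arbitrary):

$ sar -u 5 3       # CPU utilization every 5 seconds, 3 samples
$ sar -r 5 3       # memory utilization
$ sar -n DEV 5 3   # per-interface network statistics
$ sar -d 5 3       # block device activity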

Understanding the /proc filesystem

/proc is a pseudo filesystem, generally mounted at /proc, that provides an interface into kernel data structures. It contains a directory for each process ID running on the system, and inside each process-ID directory you can find additional information about that process. For instance, if you need to know which file descriptors a process has open, you can ls /proc/<pid>/fd. As an example, let's look at rsyslog:

$ pgrep rsyslog
1405
$ ls -l /proc/1405/fd
total 0
lrwx------ 1 root root 64 May  6 00:20 0 -> socket:[11792]
l-wx------ 1 root root 64 May  6 00:20 1 -> /var/log/messages
l-wx------ 1 root root 64 May  6 00:20 2 -> /var/log/cron
lr-x------ 1 root root 64 May  6 00:20 3 -> /proc/kmsg
l-wx------ 1 root root 64 May  6 00:20 4 -> /var/log/secure

rsyslog is running with process ID 1405. When I do an 'ls -l /proc/1405/fd' I can see that rsyslog has /var/log/messages open, which makes sense since rsyslog writes messages to /var/log/messages.

If you want to know which environment variables a process is running with, you can run '(cat /proc/<pid>/environ; echo) | tr '\000' '\n''. Continuing the example, let's say I want to see which environment variables rsyslog started with:

$  (cat /proc/1405/environ; echo) | tr '\000' '\n'
TERM=linux
PATH=/sbin:/usr/sbin:/bin:/usr/bin
runlevel=3
RUNLEVEL=3
LANGSH_SOURCED=1
PWD=/
LANG=en_US.UTF-8
previous=N
PREVLEVEL=N
CONSOLETYPE=serial
SHLVL=3
UPSTART_INSTANCE=
UPSTART_EVENTS=runlevel
UPSTART_JOB=rc
_=/sbin/rsyslogd

If you want to know the path of the binary that was executed try the below:

$ ls -l /proc/1405/exe
lrwxrwxrwx 1 root root 0 May  5 22:36 /proc/1405/exe -> /sbin/rsyslogd

Each process has certain limits, generally defined in /etc/security/limits.conf. Some of these can be viewed in the /proc/<pid>/limits file. For the rsyslog example, here is the output of that file:

[root@hv1 proc]# cat 1405/limits 
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             127212               127212               processes 
Max open files            1024                 4096                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       127212               127212               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        

What happens when you type in www.cnn.com in your browser?

The communication between your client and a web server can be divided into the following components.
Assumptions: You are on a Linux client and trying to connect from your Comcast home cable connection.

– DNS
– Network communication
– HTTP

DNS: A DNS lookup resolves www.cnn.com to an IP address. First the Linux client checks its local cache (the name service cache daemon, if running) to see if this answer has been cached before. If not, the request is sent to the name server specified in /etc/resolv.conf on the client. At home, the DNS resolver is usually your local router, which forwards all DNS requests to the resolver specified in its configuration, which it may have gotten from Comcast. The Comcast resolver queries the root name servers, which refer it to the .com name servers, which in turn return the NS records for cnn.com; those name servers were specified when the domain was registered with the registrar. The request is then sent to the cnn.com name servers, which look up www.cnn.com in their zone files and reply with the answer. There is a forward lookup zone and a reverse lookup zone; the forward lookup maps hostnames to IP addresses. There are two kinds of queries, recursive and non-recursive. A recursive query is when the client asks for a final answer, which the DNS server has to find and return. A non-recursive query is when the DNS server does not return the final answer; it is up to the client to find it by talking to the next resolver in line. Keep in mind that each DNS server may have its own cache, so at any point in time the reply may come from cache if an answer is available. Caching has its own intricacies, and I will cover that in a DNS-related blog.
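
You can watch this delegation chain yourself with dig, which walks the hierarchy from the root when given +trace:

$ dig +trace www.cnn.com     # walk the delegation: root -> .com -> cnn.com name servers
$ dig @8.8.8.8 www.cnn.com   # ask a specific recursive resolver directly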

Network communication: This is where the 3-way TCP handshake happens between the client and the server: SYN, SYN-ACK, and then ACK. Once the client has the IP address from the DNS lookup above, it opens a connection to the server. The client consults its routing table to see if there is an entry for cnn.com's network; if there is no entry, the connection request (the SYN packet) is sent to the default gateway. The default gateway does the same thing: it tries to find an entry for the destination network, and when it cannot, the packet is sent to its own default gateway. This continues until the packet reaches an Internet gateway running BGP. The BGP routing table contains routes to the public IP prefixes that have been allocated to ISPs, including cnn.com's public address and the way to get to it, so Comcast can forward the packet across the Internet. Once the SYN reaches www.cnn.com, the destination acknowledges it with an ACK and also sends its own SYN; this is the second part of the 3-way handshake (SYN-ACK). The same routing happens on the way back, at which point the client sends an ACK and the connection is established. One important question here is: how does the client know which hosts are on its local network and which are not? The answer is the netmask. For instance, if the client IP is 10.1.1.100 and the netmask is 255.255.255.0, the client knows that 10.1.1.0-10.1.1.255 is its local subnet and everything else is outside it. The client uses ARP, the Address Resolution Protocol, to figure out which MAC address to send the packet to at layer 2. If the default gateway is 10.1.1.1, the client sends an ARP request asking 'Who has 10.1.1.1?'. The router hears this ARP request and responds with its MAC address, and the client then addresses the frame to the router's MAC.
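
You can see both decisions on a Linux client; a quick sketch (the destination address below is just an example):

$ ip route get 203.0.113.10   # which route and source address the kernel would use for this destination
$ ip neigh show               # the ARP cache: IP-to-MAC mappings the client has learned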

HTTP: Once the destination receives the packet, let's say www.cnn.com is running behind an HTTP load balancer, in which case the load balancer can be in-line or DSR. In-line means that the load balancer handles all incoming and outgoing traffic between the client and the HTTP server. DSR, or direct server return, means that incoming connections come through the load balancer but responses go directly from the web server back to the client. Further details will be in my load balancer blog. When Apache receives the incoming request on port 80 it hands it to either a forked process or a thread. Apache has two common modes of running: prefork and worker. In prefork mode Apache uses forked processes; in worker mode it uses threads. Threads consume fewer resources but are more complex. Since prefork is the default, let's say Apache forked off a process to handle our request.
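
You can check which MPM your Apache build is using; a quick sketch for a CentOS-style install:

$ httpd -V | grep -i mpm   # shows the server MPM (e.g. Prefork or Worker)
$ httpd -l                 # lists modules compiled into the server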

If you are asked the question 'What happens when you type www.cnn.com into your browser?', the above is a good start. At various points in the conversation you can go deeper into a particular topic. For networking you can break things down further into TCP/UDP differences and routing protocols such as RIP, OSPF, and BGP. For DNS you can talk about the SOA record, what it means, the other DNS record types, and also how to set up a BIND server. For HTTP you can talk about how to set up Apache, the difference between encrypted and unencrypted traffic, and go more in depth on SSL.

CentOS CPU Scaling

CPU scaling allows the processor to adjust its speed on demand. The CPUfreq governor defines the speed and power usage of a processor. The different governors are:

cpufreq_performance – for heavy workloads, always uses the highest cpu frequency, cost is power
cpufreq_powersave – uses the lowest cpu frequency, provides the most power savings, cost is performance
cpufreq_ondemand – adjusts cpu frequency based on need, can save power when system is idle, while ramping up when system is not idle, cost is latency while switching
cpufreq_userspace – allows any process running as root to set the frequency, most configurable
cpufreq_conservative – similar to ondemand, however unlike the ondemand governor which switches between lowest and highest, conservative performs gradual change

To manage CPU governors, install the cpupower utilities with 'sudo yum install cpupowerutils -y'.
To view available governors use 'cpupower frequency-info --governors'.
To load a particular governor module use 'modprobe <module>', as in 'modprobe cpufreq_ondemand'.
To enable a governor use 'cpupower frequency-set --governor <governor>', as in 'cpupower frequency-set --governor ondemand'.
To view CPU speed and policy use 'cpupower frequency-info'.
To set a specific frequency use 'cpupower frequency-set -f <frequency>'.

To see which drivers are available for CPU scaling, 'ls /lib/modules/[kernel version]/kernel/arch/[architecture]/kernel/cpu/cpufreq/' is a good starting point.
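
The kernel also exposes the current settings per core through sysfs; a minimal sketch:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor              # governor in use on cpu0
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors   # governors the driver supports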

For additional information view https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/cpufreq_governors.html.

Understanding IP addresses

In IPv4 there are 5 IP address classes: A, B, C, D, and E. Keep in mind that when figuring out the number of hosts per network, we always subtract 2: all 0's in the host field is the network address and all 1's is the broadcast address.

  • Class A – first octet 0 to 127. 2^7 networks, 2^24-2 hosts each.
  • Class B – first octet 128 to 191. 2^14 networks, 2^16-2 hosts each.
  • Class C – first octet 192 to 223. 2^21 networks, 2^8-2 hosts each.
  • Class D – 224 to 239. Multicast; networks and hosts not defined.
  • Class E – 240 to 255. Reserved; networks and hosts not defined.
  • Let's take an example: network 10.0.0.0/8, and we have to figure out the subnet address, IP range, and broadcast address. Since it's a /8, the netmask is 255.0.0.0, because only 8 bits are used for the network and 24 bits for the host. The first usable IP is 10.0.0.1 and the last is 10.255.255.254. The broadcast address is all 1's in the host field, which translates to 10.255.255.255, and the subnet address is all 0's in the host field, which is 10.0.0.0.
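
On CentOS the ipcalc utility can confirm this arithmetic; a quick sketch (the output noted in the comments is what I would expect):

$ ipcalc -n 10.0.0.0/8   # NETWORK=10.0.0.0
$ ipcalc -b 10.0.0.0/8   # BROADCAST=10.255.255.255
$ ipcalc -m 10.0.0.0/8   # NETMASK=255.0.0.0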

Linux Configuration management requirements

There are a number of Linux configuration management solutions available for Linux hosts, such as Chef, CFEngine, Puppet, and Salt.

CFEngine has been around for a while and is the granddaddy of configuration management.
Salt is relatively new to the field. Salt is written primarily in Python.
Puppet is written in Ruby and is very popular as well as mature.
Chef is written in Ruby and Erlang, and it is quite popular among cloud providers.

Features that a configuration management solution should ideally have:

* Client policy enforcement: If end users have administrative access on their clients and can modify the system, and you need to keep configuration consistent across systems, a CM solution can override end-user changes. For instance, let's say you need to ensure that a Unix group called 'admins' always has sudo access, but a user removes this group from his desktop because he is not sure of its purpose. A good CM system runs periodically and overwrites the end user's changes, always ensuring that the 'admins' group exists.

* Fast updates are a must for large-scale deployments. As the size of a deployment grows, the speed of pushing updates should not slow down. Clients may get updates from the servers every 30 minutes by default, but it should be possible to change this to, say, 5 minutes or less as needed.

* Security is paramount, since you do not want clients downloading configuration from the servers that they are not entitled to. As such there should be some mechanism of client authentication, perhaps client certificates that the server can validate against a certificate authority.

* Encryption of communication between client and server is another consideration. If, for instance, the server is sending over a password file with encrypted passwords, you don't want this to be snooped. Some form of SSL or TLS encryption should be used between the client and server.

* Merging local updates is another feature that may be required. For instance, if a password file is pushed from the server and I need to add a user to it temporarily without changing the configuration management repository, I should be able to add a user which the CM system respects for a given amount of time.

* Ability to roll back updates is also important on the client. In case I make a mistake that gets pushed to all the clients, I should be able to say ‘go back to previous working version’.

* Dry run of configuration changes. This allows a service engineer to test changes before actually pushing them out.

* Reporting capabilities for end users and also administrators. It should be easy for the CM administrators to get reports on the number of clients and the versions of client software in use.

* The administrator should be able to apply updates on a rolling basis without any downtime. Also, if clients are offline, they should be able to update themselves when they are able to reach the server the next time around.

* Easy to use, let’s not forget this feature! The language in which configuration is specified should ideally be one that is easy to create directives in. For instance, it is not hard to find folks who dislike YAML or XML.

Which configuration management system do you use and why? Share your comments in this blog.

Where do public IP addresses come from?

Ever wonder where public IPs come from? The Internet Assigned Numbers Authority, or IANA (http://www.iana.org), is responsible for the global coordination of IP addresses. IANA coordinates with 5 regional Internet registries, or RIRs:

* ARIN (North America)
* AfriNIC (Africa)
* APNIC (Asia/Pacific Region)
* LACNIC (Latin America and some Caribbean Islands)
* RIPE NCC (Europe, Middle East, Central Asia)

IANA also administers the 13 root DNS servers. IANA is a department of ICANN (http://en.wikipedia.org/wiki/ICANN), the Internet Corporation for Assigned Names and Numbers.

Public IPv4 and IPv6 addresses for Canada, many Caribbean and North Atlantic islands, and the United States are assigned by ARIN, the American Registry for Internet Numbers (http://www.arin.net). ARIN in turn assigns IP space to large ISPs, which then give addresses out to their customers. ARIN is a nonprofit organization.
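
If you are curious which registry and organization holds a given address block, whois will usually tell you; a quick sketch (the address is just an example, and field names vary slightly between RIRs):

$ whois 8.8.8.8 | grep -iE 'netrange|orgname|organisation'   # shows the allocated range and the holding organization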

How has your experience been in trying to get new IPs from an ISP? Share your comments in this blog.

Adding users to a Mysql database

There are numerous ways of adding a user to MySQL; one method is explained below.

Create a user called blah who can connect from any host with the password of f#4fsFF334@*.

mysql> create user 'blah'@'%' identified by 'f#4fsFF334@*';
Query OK, 0 rows affected (0.02 sec)

Grant the user the ability to run the select command on the test database. test.* indicates all tables that are part of the test database.

mysql> grant select on test.* to 'blah'@'%';
Query OK, 0 rows affected (0.00 sec)

Flush privileges is required if you want the user addition to be immediate.

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Verify that the user has permissions on the test database. The second line, which shows 'GRANT SELECT', confirms the privileges; the first line, 'GRANT USAGE ON *.*', simply means the user exists with no global privileges.

mysql> show grants for 'blah'@'%';
+-----------------------------------------------------------------------------------------------------+
| Grants for blah@% |
+-----------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'blah'@'%' IDENTIFIED BY PASSWORD '*7D3D76DCFC5842A5CDF9E2F01D18D3C4647A5400' |
| GRANT SELECT ON `test`.* TO 'blah'@'%' |
+-----------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

If you want to delete the user, you can simply run 'drop user':

mysql> drop user 'blah'@'%';
Query OK, 0 rows affected (0.00 sec)
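
In practice you may not want the user connecting from any host ('%'). A minimal sketch of restricting the account to a subnet instead (the subnet is just an example):

mysql> create user 'blah'@'10.0.0.%' identified by 'f#4fsFF334@*';
mysql> grant select on test.* to 'blah'@'10.0.0.%';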

How do you manage MySQL users? Share your comments in this blog.

Improving Apache performance

Remove modules
Apache ships with many modules enabled, and you may not need all of them; disable the ones you do not use. (By default on CentOS, over 50 modules are loaded when you install Apache.) This will help speed up Apache. The modules being loaded are listed in /etc/httpd/conf/httpd.conf on lines beginning with the word LoadModule, as in:

LoadModule auth_basic_module modules/mod_auth_basic.so

For instance, if you don't use LDAP to authenticate with Apache, you can disable the authnz_ldap_module module.
If you are unsure about disabling an Apache module whose name is not self-explanatory, take a look at http://httpd.apache.org/docs/2.2/mod/ for a more detailed description.
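
Before commenting out LoadModule lines, it helps to see which modules are actually loaded and to sanity-check the config afterwards. A quick sketch for a CentOS 6 style install:

$ httpd -M               # list loaded static and shared (DSO) modules
$ httpd -t               # syntax-check the config after editing httpd.conf
# service httpd restart  # apply the change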

DNS tuning
Each DNS lookup takes time, so make sure that Apache is not doing hostname lookups. You can disable them with the directive 'HostnameLookups Off'; this is normally off by default.

Don't use .htaccess if there is no need to
Use 'AllowOverride None', since allowing overrides forces Apache to look for .htaccess files that you may not even be using. Turning this off speeds up Apache, since it's one less thing Apache has to do before serving content.

Avoid content negotiation
When you access a directory on a web server, Apache looks for an index file, basically the 'home page' of that directory, which can have various names such as index.html, index.cgi, etc. A wildcard directive like 'DirectoryIndex index' forces Apache to negotiate which variant to serve; listing the filenames explicitly, with the most common one first, avoids that extra work. So replace 'DirectoryIndex index' with something like 'DirectoryIndex index.cgi index.pl index.shtml index.html'.

How do you improve Apache’s performance? Share your comments in the blog.

Reference
http://httpd.apache.org/docs/current/misc/perf-tuning.html

Understanding rsyslog.conf

rsyslog is the logging daemon used by CentOS and Red Hat. A number of Linux applications, including the kernel, send their logging output to rsyslog. rsyslog runs as /sbin/rsyslogd and its configuration file is /etc/rsyslog.conf.
rsyslog is a full replacement for syslog and is more fully featured.

rsyslog has a modular design and supports over a dozen modules; the two most common ones are specified in /etc/rsyslog.conf as:

#UDP logging
$ModLoad imudp 
$UDPServerRun 514

#TCP logging
$ModLoad imtcp
$InputTCPServerRun 514

Lines starting with ‘#’ are ignored in /etc/rsyslog.conf.
Global directives start with $ on their own line.
Templates allow you to specify the format of the logged message. By default rsyslog logs output in the standard syslog format. To change the format, use the template directive, as in '$template RFC3164fmt,"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"'. This outputs syslog messages in the format specified in RFC 3164. RFC3164fmt is simply the name given to this template; you can call it anything you want, what matters is the actual format in double quotes.

Rules in rsyslog.conf specify what to do with a message; each rule consists of a selector and an action. A selector is a combination of facility and priority.

Facility can be any of the following: auth, authpriv, cron, daemon, kern, lpr, mail, mark, news, syslog, user, uucp, and local0 through local7. The facility identifies the subsystem that produced the log; for instance, kern is used for messages produced by the kernel.

Priority, in ascending order, can be: debug, info, notice, warning, warn (same as warning), err, error (same as err), crit, alert, emerg, panic (same as emerg). The priority defines the severity of the message.

The action is what to do with the message, for instance writing it to a log file. An example of a selector and action is 'kern.* /dev/console', which means send all kernel messages of any priority to /dev/console.
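
For reference, here are a few rules along the lines of the stock CentOS configuration (the paths are the usual defaults, adjust to taste):

# most messages at info and above, excluding mail, authpriv and cron
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
# authentication-related messages
authpriv.*                                  /var/log/secure
# mail messages; the leading '-' means don't sync the file after each write
mail.*                                      -/var/log/maillog
# cron job output
cron.*                                      /var/log/cron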

I have only covered some of the options of rsyslog; for more information you may want to run 'man rsyslog.conf'.

Have you done anything fancy with rsyslog or do you use the stock config? Share your comments in this blog.
