Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - admin

Pages: 1 [2] 3 4 ... 21
16
Host Talk / 55 SEO tips
« on: June 05, 2011, 07:44:13 PM »

SEO Tips

1. If you absolutely MUST use JavaScript drop-down menus, image maps or image links, be sure to put text links somewhere on the page for the spiders to follow.

2. Content is king, so be sure to have good, well-written and unique content that will focus on your primary keyword or keyword phrase.

3. If content is king, then links are queen. Build a network of quality backlinks using your keyword phrase as the link. Remember, if there is no good, logical reason for that site to link to you, you don’t want the link.

4. Don’t be obsessed with PageRank. It is just one itty-bitty part of the ranking algorithm. A site with lower PR can actually outrank one with a higher PR.

5. Be sure you have a unique, keyword focused Title tag on every page of your site. And, if you MUST have the name of your company in it, put it at the end. Unless you are a major brand name that is a household name, your business name will probably get few searches.

6. Fresh content can help improve your rankings. Add new, useful content to your pages on a regular basis. Content freshness adds relevancy to your site in the eyes of the search engines.

7. Be sure links to your site and within your site use your keyword phrase. In other words, if your target is “blue widgets” then link to “blue widgets” instead of a “Click here” link.

8. Focus on search phrases, not single keywords, and put your location in your text (“our Palm Springs store” not “our store”) to help you get found in local searches.

9. Don’t design your web site without considering SEO. Make sure your web designer understands your expectations for organic SEO. Doing a retrofit on your shiny new Flash-based site after it is built won’t cut it. Spiders can crawl text, not Flash or images.

10. Use keywords and keyword phrases appropriately in text links, image ALT attributes and even your domain name.

11. Check for canonicalization issues – www and non-www domains. Decide which you want to use and 301 redirect the other to it. In other words, if http://www.domain.com is your preference, then http://domain.com should redirect to it.
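On Apache, one common way to do this 301 is a mod_rewrite rule in .htaccess. This is just a sketch, assuming mod_rewrite is enabled and using the domain.com placeholder from the tip above:

```
RewriteEngine On
# Requests for the bare domain get a permanent redirect to the www form
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```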

12. Check the link to your home page throughout your site. Is index.html appended to your domain name? If so, you’re splitting your links. Outside links go to http://www.domain.com and internal links go to http://www.domain.com/index.html.

Ditch the index.html or default.php or whatever the page is and always link back to your domain.
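On Apache, one way to enforce this is a mod_rewrite 301 from /index.html back to the root. A sketch only; adjust the filename if your default page is default.php or similar:

```
RewriteEngine On
# Only externally requested /index.html URLs get redirected,
# so the internal DirectoryIndex lookup still works
RewriteCond %{THE_REQUEST} \s/index\.html [NC]
RewriteRule ^index\.html$ / [R=301,L]
```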

13. Frames, Flash and AJAX all share a common problem – you can’t link to a single page. It’s either all or nothing. Don’t use Frames at all and use Flash and AJAX sparingly for best SEO results.

14. Your URL file extension doesn’t matter. You can use .html, .htm, .asp, .php, etc. and it won’t make a difference as far as your SEO is concerned.

15. Got a new web site you want spidered? Submitting through Google’s regular submission form can take weeks. The quickest way to get your site spidered is by getting a link to it through another quality site.

16. If your site content doesn’t change often, your site needs a blog because search spiders like fresh text. Blog at least three times a week with good, fresh content to feed those little crawlers.

17. When link building, think quality, not quantity. One single, good, authoritative link can do a lot more for you than a dozen poor quality links, which can actually hurt you.

18. Search engines want natural language content. Don’t try to stuff your text with keywords. It won’t work. Search engines look at how many times a term is in your content and if it is abnormally high, will count this against you rather than for you.

19. Not only should your links use keyword anchor text, but the text around the links should also be related to your keywords. In other words, surround the link with descriptive text.

20. If you are on a shared server, do a blacklist check to be sure you’re not on a proxy with a spammer or banned site. Their negative notoriety could affect your own rankings.

21. Be aware that by using services that block domain ownership information when you register a domain, Google might see you as a potential spammer.

22. When optimizing your blog posts, optimize your post title tag independently from your blog title.

23. The bottom line in SEO is Text, Links, Popularity and Reputation.

24. Make sure your site is easy to use. This can influence your link building ability and popularity and, thus, your ranking.

25. Give link love, Get link love. Don’t be stingy with linking out. That will encourage others to link to you.

26. Search engines like unique content that is also quality content. There can be a difference between unique content and quality content. Make sure your content is both.

27. If you absolutely MUST have your main page as a splash page that is all Flash or one big image, place text and navigation links below the fold.

28. Some of your most valuable links might not appear on web sites at all, but in the form of e-mail communications such as newsletters and e-zines.

29. You get NOTHING from paid links except a few clicks unless the links are embedded in body text and NOT obvious sponsored links.

30. Links from .edu domains are given nice weight by the search engines. Run a search for possible non-profit .edu sites that are looking for sponsors.

31. Give them something to talk about. Linkbaiting is simply good content.

32. Give each page a focus on a single keyword phrase. Don’t try to optimize the page for several keywords at once.

33. SEO is useless if you have a weak or non-existent call to action. Make sure your call to action is clear and present.

34. SEO is not a one-shot process. The search landscape changes daily, so expect to work on your optimization daily.

35. Cater to influential bloggers and authority sites who might link to you, your images, videos, podcasts, etc. or ask to reprint your content.

36. Get the owner or CEO blogging. It’s priceless! CEO influence on a blog is incredible as this is the VOICE of the company. Response from the owner to reader comments will cause your credibility to skyrocket!

37. Optimize the text in your RSS feed just like you should with your posts and web pages. Use descriptive, keyword rich text in your title and description.

38. Use captions with your images. As with newspaper photos, place keyword rich captions with your images.

39. Pay attention to the context surrounding your images. Images can rank based on text that surrounds them on the page. Pay attention to keyword text, headings, etc.

40. You’re better off letting your site pages be found naturally by the crawler. Good global navigation and linking will serve you much better than relying only on an XML Sitemap.

41. There are two ways to NOT see Google’s Personalized Search results:

(1) Log out of Google

(2) Append &pws=0 to the end of your search URL in the search bar

42. Links (especially deep links) from a high PageRank site are golden. High PR indicates high trust, so the back links will carry more weight.

43. Use absolute links. Not only will it make your on-site link navigation less prone to problems (like links to and from https pages), but if someone scrapes your content, you’ll get backlink juice out of it.

44. See if your hosting company offers “Sticky” forwarding when moving to a new domain. This allows temporary forwarding to the new domain from the old, retaining the new URL in the address bar so that users can gradually get used to the new URL.

45. Understand social marketing. It IS part of SEO. The more you understand about sites like Digg, Yelp, del.icio.us, Facebook, etc., the better you will be able to compete in search.

46. To get the best chance for your videos to be found by the crawlers, create a video sitemap and list it in your Google Webmaster Central account.

47. Videos that show up in Google blended search results don’t just come from YouTube. Be sure to submit your videos to other quality video sites like Metacafe, AOL, MSN and Yahoo to name a few.

48. Surround video content on your pages with keyword rich text. The search engines look at surrounding content to define the usefulness of the video for the query.

49. Use the words “image” or “picture” in your photo ALT descriptions and captions. A lot of searches are for a keyword plus one of those words.

50. Enable “Enhanced image search” in your Google Webmaster Central account. Images are a big part of the new blended search results, so allowing Google to find your photos will help your SEO efforts.

51. Add viral components to your web site or blog – reviews, sharing functions, ratings, visitor comments, etc.

52. Broaden your range of services to include video, podcasts, news, social content and so forth. SEO is not about 10 blue links anymore.

53. When considering a link purchase or exchange, check the Google cache date of the page where your link will be located. Search for “cache:URL”, replacing “URL” with the actual page address. The newer the cache date the better. If the page isn’t there or the cache date is more than a month old, the page isn’t worth much.

54. If you have pages on your site that are very similar (you are concerned about duplicate content issues) and you want to be sure the correct one is included in the search engines, place the URL of your preferred page in your sitemaps.

55. Check your server headers. Search for “check server header” to find free online tools for this. You want to be sure your URLs report a “200 OK” status, or a “301 Moved Permanently” for redirects. If the status shows anything else, check that your URLs are set up properly and used consistently throughout your site.



17
MySQL / mysql: Got a packet bigger than 'max_allowed_packet' bytes
« on: May 12, 2011, 01:40:22 AM »
If you get this error:
Code: [Select]
Got a packet bigger than 'max_allowed_packet' bytes
it means you need to increase max_allowed_packet. You can do that by editing the my.cnf (or my.ini) file and setting max_allowed_packet=100M under the [mysqld] section, or you can run these commands in a mysql console connected to that same server:

Code: [Select]
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;

(Use a very large value for the packet size.)
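Note that values set with SET GLOBAL are lost when the server restarts; only the config file change is permanent. A minimal my.cnf sketch (the [mysqld] section is standard, but the exact file path varies by distro):

```
[mysqld]
max_allowed_packet=100M
net_buffer_length=1M
```

You can verify the running value afterwards with SHOW VARIABLES LIKE 'max_allowed_packet'; in the mysql console.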

18
General Discussion / Outlook: How to Recall a Sent Message
« on: April 28, 2011, 11:17:18 AM »
Have you ever clicked send on a message and then remembered that you forgot to attach that important file, or realized you put the wrong time down for a meeting? Outlook allows you the option of recalling a sent message. Here’s how:

For Outlook 2003:

1. Go to the Sent Items folder.

2. Find the message you want recalled and double-click it.

3. Go to the Actions menu and select Recall This Message.

4. To recall the message:

Select Delete unread copies of this message.
(Note: the recipient needs to have Outlook opened for the message to be deleted)

To replace the message:

Select Delete unread copies and replace with a new message, click OK, and type your new message.

To be notified about the success of the recall or replacement:

Check the Tell me if recall succeeds or fails for each recipient check box.

5. Click OK.

How To Recall a Sent Message in Outlook 2007:

1. Click on Sent Items.

2. Find the message you want recalled and double-click it to open.

3. Go to the Ribbon.

4. In the Actions section, click Other Actions and select Recall This Message.

5. Select Delete unread copies of this message.

6. To be notified about the success of the recall, check the Tell me if recall succeeds or fails for each recipient checkbox.

7. Click OK.

19
Great offer!

20
Lunix & Unix / Re: How to: install ionCube loaders on Linux server
« on: April 17, 2011, 03:45:55 AM »
If you are using DirectAdmin you may use the following steps:

Code: [Select]
cd /root/
wget downloads2.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
tar -zxvf ioncube_loaders_lin_x86-64.tar.gz
cd ioncube
mkdir /usr/local/ioncube/
cp ioncube_loader_lin_5.2.so /usr/local/ioncube/


Note: Replace x86-64 with i386 for a 32-bit OS


In php.ini:

Code: [Select]
zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.2.so


21
If you see this error:
Code: [Select]
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 19456 bytes)
it means your PHP memory limit is only 32 MB, and you should increase it by adding:

Code: [Select]
php_value memory_limit 128M

in your .htaccess
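If your host doesn’t allow php_value overrides in .htaccess (this only works under mod_php with AllowOverride enabled), the same limit can be raised in php.ini instead. A sketch; the file location varies by distro:

```
memory_limit = 128M
```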

22
Windows / Clearing Your DNS Cache
« on: April 04, 2011, 09:45:29 PM »

Clearing Your DNS Cache

Windows® XP, 2000, or Vista®
1. Open the Start menu.
2. Go to Run.
   ◦ If you do not see the Run command in Vista, search for "run" in the Search bar.
3. In the Run text box, type: ipconfig /flushdns
4. Press Enter or Return, and your cache will be flushed.

MacOS®
1. Go to Applications.
2. Go to Utilities.
3. Open the Terminal application.
4. Type: dscacheutil -flushcache
5. Press Enter or Return, and your cache will be flushed.

23
How to: Allow telnet and ssh through iptables under Linux

Q. I run RHEL / CentOS Linux servers, and by default the firewall blocks everything, including telnet and ssh access. How do I allow telnet (port 23) and ssh (port 22) through the Linux iptables firewall?

A. By default, firewall rules are stored in the file /etc/sysconfig/iptables under CentOS / RHEL. All you have to do is edit this file and add rules to open port 22 or 23.

Login as the root user.

Open the /etc/sysconfig/iptables file, enter:
Code: [Select]
# vi /etc/sysconfig/iptables
Find the line that reads as follows:
Code: [Select]
COMMIT
To open port 22 (ssh), add the following (before the COMMIT line):

Code: [Select]
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
To open port 23 (telnet), add the following (before the COMMIT line):

Code: [Select]
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 23 -j ACCEPT
Save and close the file. Restart the firewall:
Code: [Select]
# /etc/init.d/iptables restart
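After the edit, the relevant part of /etc/sysconfig/iptables should look roughly like this (a sketch; your file will contain other rules as well):

```
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 23 -j ACCEPT
COMMIT
```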

24
Lunix & Unix / How Do I Block an IP Address on My Linux server?
« on: April 02, 2011, 01:21:02 AM »
How do I block an IP address or subnet under Linux operating system?

In order to block an IP on your Linux server you need to use iptables tools (administration tool for IPv4 packet filtering and NAT) and netfilter firewall. First you need to log into shell as root user. To block an IP address you need to type the iptables command as follows:

Syntax to block an IP address under Linux
Code: [Select]
iptables -A INPUT -s IP-ADDRESS -j DROP
Replace IP-ADDRESS with the actual IP address. For example, if you wish to block the IP address 65.55.44.100 for whatever reason, type the command as follows:
Code: [Select]
# iptables -A INPUT -s 65.55.44.100 -j DROP
If you have an iptables firewall script, add the above rule to your script.

If you just want to block access from IP 65.55.44.100 to one port, say port 25, type the command:
Code: [Select]
# iptables -A INPUT -s 65.55.44.100 -p tcp --destination-port 25 -j DROP
The above rule will drop all packets coming from IP 65.55.44.100 to mail server port 25.

CentOS / RHEL / Fedora: Block An IP And Save It To The Config File
Type the following two commands:
Code: [Select]
# iptables -A INPUT -s 65.55.44.100 -j DROP
# service iptables save

How Do I Unblock An IP Address?
Use the following syntax (the -D option deletes the rule from the table):
Code: [Select]
# iptables -D INPUT -s xx.xxx.xx.xx -j DROP
For example:
Code: [Select]
# iptables -D INPUT -s 65.55.44.100 -j DROP
# service iptables save

25
Lunix & Unix / Allow IP to access your Firewall (iptables)
« on: April 02, 2011, 01:14:52 AM »
If you have two servers and you want them to talk to each other, you have to allow each server's IP through the other's firewall.
Say your server1 IP is: 174.140.168.162
Then you can run on server2:
 
Code: [Select]
iptables -A INPUT   -j ACCEPT -p all -s 174.140.168.162
iptables -A OUTPUT  -j ACCEPT -p all -d 174.140.168.162
service iptables save
and you can check the result by:
Code: [Select]
iptables -L  -n -v --line-numbers

26
Lunix & Unix / How to: install ionCube loaders on Linux server
« on: March 28, 2011, 11:55:41 PM »
The ionCube loader is a PHP extension that decodes encrypted PHP files at runtime. Most commercial hosting companies already have the ionCube loader installed, but if that is not your case, here's how to install it.

Steps:

    * Login via SSH to your server as root

    * Download the loader from ionCube:

     
Code: [Select]
wget http://downloads2.ioncube.com/loader_downloads/ioncube_loaders_lin_x86.tar.bz2
      Wait until it's finished:

Code: [Select]
      --09:20:50--  http://downloads2.ioncube.com/loader_downloads/ioncube_loaders_lin_x86.tar.bz2
      Resolving downloads2.ioncube.com... 72.9.241.122
      Connecting to downloads2.ioncube.com|72.9.241.122|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 2436435 (2.3M) [application/x-bzip2]
      Saving to: `ioncube_loaders_lin_x86.tar.bz2'

      100%[====================================================================================================================>] 2,436,435   25.8K/s   in 93s

      09:22:26 (25.6 KB/s) - `ioncube_loaders_lin_x86.tar.bz2' saved [2436435/2436435]
    * Extract the loader package:

     
Code: [Select]
tar xvjf ioncube_loaders_lin_x86.tar.bz2
    * Check your PHP version:

     
Code: [Select]
php -v
      You'll see something like this:

   
Code: [Select]
  [root@]# php -v
      PHP 5.1.6 (cli) (built: May 24 2008 14:07:53)
      Copyright (c) 1997-2006 The PHP Group
      Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies

      So, my PHP version is: 5.1
      The ionCube module I'll use is: ioncube_loader_lin_5.1.so

    * Copy the ionCube module to PHP modules directory.

     
Code: [Select]
cp ioncube/ioncube_loader_lin_5.1.so /usr/lib/php/modules/
    * Add the module into PHP configuration file

     
Code: [Select]
nano /etc/php.d/ioncube.ini
    * Add the following line:

     
Code: [Select]
zend_extension = /usr/lib/php/modules/ioncube_loader_lin_5.1.so
    * Save and exit from nano.

     
Code: [Select]
Ctrl + O
      Ctrl + X
    * Restart Apache

     
Code: [Select]
service httpd restart

      OR

     
Code: [Select]
/etc/init.d/httpd restart
    * Verify the module:

     
Code: [Select]
php -v

      You'll see something like this:

   
Code: [Select]
  [root@]# php -v
      PHP 5.1.6 (cli) (built: May 24 2008 14:07:53)
      Copyright (c) 1997-2006 The PHP Group
      Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
          with the ionCube PHP Loader v3.3.11, Copyright (c) 2002-2010, by ionCube Ltd.
    * Done.

Good luck!

27
If you have a lot of visitors to your website, your Apache server is falling over, and you get the following error in the Apache log:
Code: [Select]
server reached MaxClients setting, consider raising the MaxClients setting
then you need to increase the MaxClients value. For more details, please see:
http://developers-heaven.net/forum/index.php/topic,2452.0.html

28
This article is of interest to you if:
- You run your site(s) on a dedicated server.
- You run a Un*x system.
- You run Apache as your HTTP server.
- You have a lot of traffic, are experiencing performance issues, or anticipating them.

In this article I will explain how to install Mathopd as a static file/image serving HTTP server, and why it will be beneficial.


Understanding Apache's Forking system

For you to understand why what we are going to do is useful, it is first necessary that you understand (from a high level) how apache works.

Basically, Apache handles a request with a process (known as a child). Each simultaneous request is handled by a different process. Therefore, the more simultaneous connections to your server are open, the more child processes Apache will create to handle these connections.
This is why, when you run the command top (or ps), you can see a number of httpd processes running on your server.

If you look into your apache's configuration file (httpd.conf), a number of parameters allow you to manipulate the way this process generation works:

MaxClients defines the max number of child processes that can be created by apache (hence the max number of simultaneous requests served).
StartServers defines the number of processes that are created by default when apache is started.

Apache doesn't wait for incoming requests to create the processes that will treat them. Creating a process takes time, and that would result in bad response times. Instead, apache creates processes in advance and keeps them running, waiting for new connections. The variables MinSpareServers and MaxSpareServers define the min and max numbers of idle processes kept waiting for incoming connections.

So the basic life of a child process is:

1) Apache creates the child process because there are less than MinSpareServers waiting for connections.
2) The child process waits for a connection.
3) A connection is made. The child process answers the request.
4) The child process goes back to the wait status to serve a new request.

Processes can be deleted for multiple reasons, for example because there are more than MaxSpareServers waiting, or more than MaxRequestsPerChild requests were already served by this process.

KeepAlive is a mechanism implemented in apache (and many other http servers) that keeps the connection to a client open after a request has been served, instead of closing it immediately. The connection is "kept alive" for up to KeepAliveTimeOut seconds, after which it is closed. This is very interesting because opening a connection is a costly operation, and because it is very common for a client to send a group of requests in a short period of time.
For example, when you download a web page, you send a request for the page itself, and one request for each file referenced on the page (images, css, javascript, etc.). Without KeepAlive, you would open a new connection to apache for every one of these requests, and they would all be served by multiple child processes. With KeepAlive on, all requests made when you load a page are served over the same TCP connection by the same child process, which allows significant speed improvements.
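All the directives discussed above live in httpd.conf; a typical prefork configuration might look like the sketch below (the values are illustrative only, not recommendations):

```
StartServers          8
MinSpareServers       5
MaxSpareServers      20
MaxClients          150
MaxRequestsPerChild 4000
KeepAlive            On
KeepAliveTimeout     15
```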

If your server has mod-status installed, you can check your server status and view the states of the various active processes.






Fork vs Select

The forking mechanism of Apache (it "forks" new processes), used in combination with KeepAlive, has many advantages, the main one being robustness. Since every request is served within a different process, a single request failing for some reason can never crash the whole server.

However, when traffic increases and your system starts getting busy, some problems appear with this mechanism.
Because each apache child process uses CPU and memory, the more connections you have, the more CPU and memory are necessary.
And while KeepAlive is necessary, because of the way it works, a visitor or a search engine robot viewing only one page and leaving, or a remotely linked image, will tie up a child process, unavailable to serve other requests, for the duration of KeepAliveTimeOut. The more traffic your server gets, the more processes are created, and the more you risk facing a situation where MaxClients is reached.

"Why not just make MaxClients higher, then?" you may ask. Well, first, the default max value is 255; you need to edit apache's source and recompile to make this value higher. Also, because of the inherent memory and CPU consumption of each child process, allowing more than is reasonable risks running out of resources. Consider as well the case where all these clients are actual users making regular requests to your scripts: these may open as many SQL connections and could exhaust your database's resources (in that sense, MaxClients is an easy way to limit incoming connections to your database).

A temporary solution is to reduce the KeepAliveTimeOut value, in order to release child processes more quickly after they have served a client. However, this is more of a quick fix, and as traffic grows you are likely to run into the same problem again later.

Here is a better solution that will not only prove more scalable, but also significantly reduce resource usage on your server: use a different HTTP server to serve your static files.
In this article we will see how to install and configure Mathopd, but there are other options for a lightweight HTTP server, such as thttpd or Boa.

The main idea behind these servers is to use a different mechanism to serve requests: instead of forking processes to serve requests, all requests are served from the same process and rely on the select() and sendfile() system calls.
This results in a much better performance when serving static files (which are treated directly by the kernel instead of read and written by the server as with apache), and a much lower overall resource usage (due to the fact that only one process is running). Using them only for static files makes the risk of the process crashing very low.

Our strategy here will be to serve all static files from this lightweight HTTP server, and use apache only to serve scripts. By doing this, we now face a situation in which a client will most of the time send only one request to apache at a time, and we can turn keepalive off, which will result in a performance improvement on apache's side as well.



Preparing the server

We will take the example of serving scripts from the regular www.example.com subdomain, and the static files from static.example.com.
First, we want both HTTP servers to listen on port 80. This is because many firewalls will not allow any fancy port, and you don't want visitors behind those firewalls to miss the static files. So you will need 2 different IP addresses configured on your server.
If your server does not come with 2 IP addresses, get one from your host, and have the DNS configured so that it reverses to static.example.com.
Then install the IP address on your server (we assume that 111.222.33.43 was your primary IP and 111.222.33.44 is your new IP):


Code:
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-eth0:1
Then edit ifcfg-eth0:1 so that it looks like (replace 111.222.33.44 with your new ip address):


Code:
# more ifcfg-eth0:1
DEVICE=eth0:1
BOOTPROTO=static
IPADDR=111.222.33.44
NETMASK=255.255.255.0
ONBOOT=yes
Then, edit your hosts file:


Code:
# more /etc/hosts
127.0.0.1              localhost.localdomain localhost
111.222.33.43          my.server.name
111.222.33.44          static.example.com
Restart your network:

Code:
# /etc/rc.d/init.d/network restart
You need to update your DNS server with the new alias. If you are using bind, this typically means adding a line in /var/named/example.com.hosts with:

Code:
static.example.com.   IN      A       111.222.33.44
Restart named:

Code:
# /etc/rc.d/init.d/named restart
We're almost ready. The last thing we need to make sure of is that Apache is now bound to the primary IP.
Edit the BindAddress parameter in httpd.conf:


Code:
BindAddress 111.222.33.43


Installing Mathopd

We can now install mathopd:


Code:
# mkdir mathopdsrc
# cd mathopdsrc
# wget http://www.mathopd.org/dist/mathopd-1.5p5.tar.gz
# gzip -d mathopd-1.5p5.tar.gz
# tar xvf mathopd-1.5p5.tar
# cd mathopd-1.5p5/src
# make
That's it! Note that the mathopd tar is only 60kB. Now that's what you may call lightweight.



Configuring Mathopd

We now need to configure mathopd properly. You can look at the sample config file provided in the distribution for detailed information. Here are the most important things to edit in /usr/local/etc/mathopd.conf:

In Tuning, set NumConnections to the max number of concurrent clients you may expect. Mathopd implements KeepAlive with a mechanism called clobbering. Basically, Mathopd will keep alive all connections, and if NumConnections is reached, will kill idle connections to handle new requests. This is an excellent feature, because unlike apache, you cannot have clients locked out because of idle connections waiting in the KeepAlive state. If a new client comes, mathopd will always be able to serve it, unless all its connections are actually active (reading/writing). If you ever face this situation, you only have to set NumConnections to a higher value which should have minimal impact on performance (mathopd uses very little memory for each connection).

You may configure the Log and LogFormat sections depending on what you wish to log. If you want optimum performance, you can disable logging completely by setting the Log entry to /dev/null.

You can bind your server to the proper IP address by adding a server entry:


Code:
Server {
        # here the IP you want to bind mathopd with
        Address 111.222.33.44
        Virtual {
                AnyHost
                Control {
                        # here we map a request to static.example.com/ to a specific folder
                        # this is similar to apache's document root
                        Alias /
                        Location /home/example.com/www/
                        # here you can add extra headers to your files
                        ExtraHeaders { "Cache-Control: max-age=7200, must-revalidate" }
                }
        }
}
As shown in the example above, it is very easy to add expire headers to the HTTP responses, which is probably something you want to do if your images and static files don't change very often, in order to save some bandwidth.



Starting Mathopd

Now that we're done with the configuration, all that's left to do is start mathopd.


Code:
# /usr/local/mathopd/mathopd -f /usr/local/etc/mathopd.conf


Update your scripts

You will now need to update your site and forum scripts so that images are referenced with the new URL http://static.example.com/(oldpath). Depending on your scripts' complexity and size, and the number of images used, this could be the longest step of the installation.

You may then set KeepAlive off in Apache, and restart the server.


Code:
# /etc/rc.d/init.d/httpd restart


Done!

And there you go, mathopd should now be serving all static files.
You'll note that only one process appears in the list.



You can monitor mathopd's status by sending a SIGWINCH signal to its process:


Code:
kill -SIGWINCH `cat /var/run/mathopd.pid`
This will create a dump file in /tmp with information similar to apache's mod-status:


Code:
# more /tmp/mathopd-17756-dump
*** Dump performed at 1117542000.877746
Uptime: 686399 seconds
Active connections: 150 out of 150
Max simultaneous connections since last dump: 150
Forked child processes: 0
Exited child processes: 0
Requests executed: 52706071
Accepted connections: 50667648
Pipelined requests: 119
CPU time used by this process:    65262.39
                     children:        0.00
Connections:
-----W--------W--------------W------------W---------W-----W------------W----------W-------------------------------R-------------------------R---------
Reading: 2, Writing: 8, Waiting: 140, Forked: 0
*** End of dump


Conclusion

This article showed how to install and configure Mathopd on your server, in order to serve all static files with this lightweight HTTP server. If your site is busy, doing so may drastically improve performance, for the following reasons:

- Mathopd serves static files much faster than Apache.
- Mathopd uses a single process and can serve many more requests for static files than Apache, using much less resources.
- Once you have set up a lightweight HTTP server to serve all static files, you may disable KeepAlive in Apache, which will also result in better performance.

Mathopd is very robust; I am currently using it to serve several hundred requests per second and have yet to see it cause any problems.

The installation and setup may seem a bit complex if you're new to Un*x systems, but the benefits are definitely worth the effort, and may well delay or even avoid a server upgrade you thought was needed.

29
Lunix & Unix / Find all 777 directories
« on: March 12, 2011, 12:49:13 PM »
To find all directories that any user can read and write, i.e. directories with permission 777, you can use the following command:
Code: [Select]
find /home/ -type d -perm 777
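A quick way to see the command in action is a scratch directory (the directory names below are made up for the demo):

```shell
# Build a small tree with one world-writable directory and one normal one
tmp=$(mktemp -d)
mkdir -p "$tmp/world_writable" "$tmp/normal"
chmod 777 "$tmp/world_writable"
chmod 755 "$tmp/normal"

# -perm 777 matches directories whose mode is exactly 777
find "$tmp" -type d -perm 777
```

Only the world_writable directory is printed; on a real server you would point find at /home/ as above.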

30
Lunix & Unix / List new files in Unix/Linux
« on: March 12, 2011, 12:43:58 PM »
To find and display files last modified less than 90 days ago:

Code: [Select]
find . -name "*" -mtime -90 -print
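A small self-contained demo (file names invented for the example; "touch -d" uses the GNU coreutils date syntax):

```shell
# One recently modified file and one 100 days old
tmp=$(mktemp -d)
touch "$tmp/fresh.txt"
touch -d "100 days ago" "$tmp/stale.txt"

# -mtime -90 keeps only files modified less than 90 days ago
find "$tmp" -type f -mtime -90 -print
```

Only fresh.txt is listed. Note that the -name "*" in the original command is harmless but redundant.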
