
THE Ultimate Htaccess


AskApache.com

.htaccess is a very old configuration file that controls the Web Server running your website, and it is one of the most powerful configuration files you will ever come across. Htaccess can control access over the WWW's HyperText Transfer Protocol (HTTP) using Password Protection, 301 Redirects, rewrites, and much, much more. That is because this configuration file was coded in the earliest days of the web (HTTP), for one of the first Web Servers ever! The sites served by those early servers (configured with htaccess) became known as the World Wide Web, which grew into the web we use today.

This is not an introduction to .htaccess… This is the evolution of the best of the best.

You've come to the right place if you are looking to acquire mad skills for using .htaccess files.

Originally (2003) this guide was known in certain hacker circles and hidden corners of the net as the ultimate .htaccess, due to its powerful htaccess tricks and tips for bypassing security on a webhost, and because many of the tricks and examples were pretty impressive back then in that group.

Htaccess - Evolved

The Hyper Text Transfer Protocol (HTTP) was initiated at CERN in Geneva, Switzerland, where it emerged (together with the HTML markup language) from the need to exchange scientific information on a computer network in a simple manner. The first public HTTP implementation only allowed for plain-text information, and almost instantaneously became a replacement for the Gopher service. One of the first text-based browsers was Lynx, which still exists today; a graphical HTTP client quickly appeared under the name NCSA Mosaic, a popular browser back in 1994. Soon the need for a richer multimedia experience arose, and the markup language grew to support a multitude of media types.

Htaccess file know-how will do several things for you:

  • Make your website noticeably faster.
  • Allow you to debug your server with ease.
  • Make your life easier and more rewarding.
  • Allow you to work faster and more productively.

AskApache Htaccess Journey

Skip this - still under edit

I discovered these tips and tricks mostly while working as a network security penetration specialist hired to find security holes in web hosting environments. Shared hosting is the most common and cheapest form of web hosting, where multiple customers are placed on a single machine and "share" the resources (CPU/RAM/SPACE). The machines are configured to do basically ONLY HTTP and FTP. No shells or interactive logins, no ssh, just FTP access. That is when I started examining htaccess files in great detail and learned about the incredible untapped power of htaccess. 99% of the world's best Apache admins don't use .htaccess much, if AT ALL. It's much easier, safer, and faster to configure Apache using the httpd.conf file instead. However, that file is almost never readable on shared hosts, and I've never seen it writable. So the only avenue left for those on shared hosting was, and is, the .htaccess file, and holy freaking fiber-optics… it's almost as powerful as httpd.conf itself!

Almost all .htaccess code works in the httpd.conf file, but only around half of the httpd.conf directives work in .htaccess files. So the best Apache admins and programmers never used .htaccess files: there was no incentive for those with access to httpd.conf to use htaccess, and the gap grew. It's common to see "computer gurus" on forums and mailing lists rail against all uses and users of .htaccess files, smugly announcing the well-known problems with .htaccess files compared with httpd.conf. I wonder if these "gurus" know the history of the htaccess file, like its use in the earliest versions of the HTTP server, NCSA's HTTPd, which, BTW, evolved into the Apache HTTP Server. So you could easily say that htaccess files predate Apache itself.

Once I discovered what .htaccess files could do towards helping me enumerate and exploit security vulnerabilities even on big shared hosts, I focused all my research on .htaccess files, meaning I was reading the venerable Apache HTTP source code 24/7! I compiled every released version of the Apache Web Server, ever, even NCSA's, and focused on enumerating the most powerful htaccess directives. Good times! Because my focus was on protocol/file/network vulnerabilities instead of web dev, I built up a nice toolbox of htaccess tricks for doing unusual things. When I switched over to web dev in 2005 I started using htaccess for websites, not research. I documented most of my favorites and rewrote the htaccess guide for web developers. After some great encouragement on various forums and nets I decided to start a blog to share my work with everyone; AskApache.com was registered, I published my guide, and it was quickly plagiarized and scraped all over the net. Information is freedom, and freedom is information, so this blog has the least restrictive copyright for you. Feel free to modify, copy, republish, sell, or use anything on this site ;)

What Is .htaccess

Specifically, .htaccess is the default file name of a special configuration file that provides a number of directives (commands) for controlling and configuring the Apache Web Server, as well as the modules that are built into the Apache installation or loaded at run-time, like mod_rewrite (for htaccess rewrites), mod_alias (for htaccess redirects), and mod_ssl (for controlling SSL connections).
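To make that concrete, here is a tiny hypothetical .htaccess exercising all three of those modules (the paths and domain are placeholders, and each directive only takes effect if its module is loaded):

# mod_alias: permanently redirect one old URL
Redirect 301 /old-page.html http://www.example.com/new-page.html
# mod_rewrite: send requests for missing files to a front controller
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /index.php [L]
# mod_ssl: refuse any non-SSL request for this directory
SSLRequireSSL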

Htaccess allows for decentralized management of Web Server configuration, which makes life very easy for web hosting companies and especially for their savvy customers. Hosts set up and run "server farms" where many hundreds or thousands of web hosting customers are all put on the same Apache Server. This type of hosting is called "virtual hosting", and without .htaccess files it would mean that every customer must use the same exact settings as everyone else on their segment. That is why any half-decent web host (DreamHost, Powweb, MediaTemple, GoDaddy) allows and enables .htaccess files, though few people are aware of it. Let's just say that if I were a customer on your server farm and .htaccess files were enabled, my websites would be a LOT faster than yours, as these configuration files let you fully utilize the resources allotted to you by your host. If even 1/10 of the sites on a server farm took advantage of what they are paying for, the providers would go out of business.

SKIP: History of Htaccess in 1st Apache.

One of the design goals for this server was to maintain external compatibility with the NCSA 1.3 server --- that is, to read the same configuration files, to process all the directives therein correctly, and in general to be a drop-in replacement for NCSA. On the other hand, another design goal was to move as much of the server's functionality into modules which have as little as possible to do with the monolithic server core. The only way to reconcile these goals is to move the handling of most commands from the central server into the modules.

However, just giving the modules command tables is not enough to divorce them completely from the server core. The server has to remember the commands in order to act on them later. That involves maintaining data which is private to the modules, and which can be either per-server, or per-directory. Most things are per-directory, including in particular access control and authorization information, but also information on how to determine file types from suffixes, which can be modified by AddType and DefaultType directives, and so forth. In general, the governing philosophy is that anything which can be made configurable by directory should be; per-server information is generally used in the standard set of modules for information like Aliases and Redirects which come into play before the request is tied to a particular place in the underlying file system.

Another requirement for emulating the NCSA server is being able to handle the per-directory configuration files, generally called .htaccess files, though even in the NCSA server they can contain directives which have nothing at all to do with access control. Accordingly, after URI -> filename translation, but before performing any other phase, the server walks down the directory hierarchy of the underlying filesystem, following the translated pathname, to read any .htaccess files which might be present. The information which is read in then has to be merged with the applicable information from the server's own config files (either from the <directory> sections in access.conf, or from defaults in srm.conf, which actually behaves for most purposes almost exactly like <directory />).

Finally, after having served a request which involved reading .htaccess files, we need to discard the storage allocated for handling them. That is solved the same way it is solved wherever else similar problems come up, by tying those structures to the per-transaction resource pool.

Creating Htaccess Files

Htaccess files use the default filename ".htaccess", but any unix-style file name can be specified from the main server config using the AccessFileName directive. The file isn't .htaccess.txt; it's literally just named .htaccess.
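Note that AccessFileName is itself a main-server-config directive, so it only helps if you control httpd.conf; a sketch (the filename here is just an example):

# httpd.conf (not usable inside .htaccess itself):
# look for "config.htaccess" instead of ".htaccess" in each directory
AccessFileName config.htaccess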

In a Windows environment like the one I use for work, you can change how Windows opens and views .htaccess files by modifying the Folder Options in Explorer. On my computer, files ending in .htaccess are recognized as having the HTACCESS extension and are handled/opened by Adobe Dreamweaver CS4.

Htaccess Scope

Unlike the main server configuration files like httpd.conf, Htaccess files are read on every request, so changes in these files take immediate effect. Apache searches all htaccess-enabled directories and subdirectories along the requested path for an .htaccess file, which results in a performance loss due to the extra file accesses. I've never noticed a performance loss, but OTOH, I know how to use them. If you do have access to your main server configuration file, you should of course use that instead, and lucky for you, ALL the .htaccess tricks and examples can be used there as well (just not vice versa).
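If you do have httpd.conf access, the AllowOverride directive is what turns .htaccess processing on or off per directory tree; a sketch (the directory paths are placeholders):

# Stop Apache from even looking for .htaccess files (fastest)
<Directory "/var/www/html">
AllowOverride None
</Directory>
# Or permit only certain classes of directives in .htaccess
<Directory "/var/www/html/blog">
AllowOverride AuthConfig FileInfo Indexes
</Directory>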

Htaccess File Syntax

Htaccess files follow the same syntax as the main Apache configuration files; for powerusers, here's an apache.vim for VI. The one main difference is the context of a directive, which determines whether that directive is ALLOWED to be used inside of an .htaccess file. Htaccess files are incredibly powerful, but they can also be very dangerous, as some directives allowed in the main configuration files would let users/customers completely bypass security, bandwidth limits, resource limits, file permissions, etc. About 1/4 of all Apache directives cannot be used inside an .htaccess file (also known as a per-directory context config). The Apache developers are well-regarded throughout the world as being among the best programmers, ever. Enabling a disallowed directive inside a .htaccess file would require modifying the source code and re-compiling the server (which they allow and encourage if you are the owner/admin).

Htaccess Directives

Don't ask why, but I personally downloaded each major/beta release of the Apache HTTPD source code from version 1.3.0 to version 2.2.10 (all 63 Apache versions!), then I configured and compiled each version for a custom HTTPD installation built from source. This allowed me to find every directive allowed in .htaccess files for each particular version, which has never been done before, or since. YES! I think that is so cool..

An .htaccess directive is basically a command, specific to a module or built into the core, that performs a specific task or sets a specific setting for how Apache serves your website. Directives placed in Htaccess files apply to the directory they are in and all sub-directories. Here are the 3 top links (official Apache docs) you will use repeatedly; bookmark/print/save them.

htaccess Context Legend

  1. Terms Used to Describe Directives
  2. Official List of Apache Directives
  3. Directive Quick-Reference -- with Context

Main Server Config Examples

Now let's take a look at some htaccess examples to get a feel for the syntax and a general idea of the capabilities. Some of the best examples of .htaccess-style configuration are the main server config files included with Apache, so let's take a quick look at a couple of them on our way down to the actual .htaccess examples further down the page (this site has thousands, take your time). The basic syntax: a line starting with # is a comment; every other line is a directive followed by its arguments.
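In miniature, with hypothetical values, that syntax looks like this:

# A comment must occupy the entire line;
# Apache does not allow trailing comments after a directive's arguments.
DirectoryIndex index.php index.html
ErrorDocument 404 /not-found.html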

httpd-multilang-errordoc.conf: The configuration below implements multi-language error documents through content-negotiation
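The stock file is too long to reproduce in full, so here is an abridged sketch from memory of how it works (paths and the language list vary per install): each error code is mapped to a type-map (.var) file, and mod_negotiation picks the visitor's language:

Alias /error/ "/usr/local/apache2/error/"
<Directory "/usr/local/apache2/error">
AllowOverride None
AddHandler type-map var
AddOutputFilter Includes html
LanguagePriority en cs de es fr it ja ko nl pl
ForceLanguagePriority Prefer Fallback
</Directory>
ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var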

Here are the rest of them if you wanna take a look. (httpd-mpm.conf, httpd-default.conf, httpd-ssl.conf, httpd-info.conf, httpd-vhosts.conf, httpd-dav.conf)


Example .htaccess Code Snippets

Here are some specific examples, this is the most popular section of this page. Updated frequently.

Redirect Everyone Except IP address to alternate page

ErrorDocument 403 http://www.yahoo.com/
Order deny,allow
Deny from all
Allow from 208.113.134.190

When developing sites

This lets Google crawl the page, lets me access it without a password, and lets my client access the page WITH a password. It also allows for XHTML and CSS validation! (w3.org)

AuthName "Under Development"
AuthUserFile /home/sitename.com/.htpasswd
AuthType basic
Require valid-user
Order deny,allow
Deny from all
Allow from 208.113.134.190 w3.org htmlhelp.com googlebot.com
Satisfy Any

Fix double-login prompt

Redirect non-https requests to https server and ensure that .htpasswd authorization can only be entered across HTTPS

SSLOptions +StrictRequire
SSLRequireSSL
SSLRequire %{HTTP_HOST} eq "askapache.com"
ErrorDocument 403 https://askapache.com

Set the Timezone of the Server

SetEnv TZ America/Indianapolis

Administrator Email for ErrorDocument

SetEnv SERVER_ADMIN webmaster@google.com

ServerSignature for ErrorDocument

# ServerSignature accepts one of: On | Off | EMail
ServerSignature EMail

Charset and Language headers

Article: Setting Charset in htaccess, and article by Richard Ishida

AddDefaultCharset UTF-8
DefaultLanguage en-US

Disallow Script Execution

Options -ExecCGI
AddHandler cgi-script .php .pl .py .jsp .asp .htm .shtml .sh .cgi

Deny Request Methods

RewriteCond %{REQUEST_METHOD} !^(GET|HEAD|OPTIONS|POST|PUT)
RewriteRule .* - [F]

Force "File Save As" Prompt

AddType application/octet-stream .avi .mpg .mov .pdf .xls .mp4

Show CGI Source Code

RemoveHandler cgi-script .pl .py .cgi
AddType text/plain .pl .py .cgi

Serve all .pdf files on your site through a PHP script, using .htaccess and mod_rewrite.

RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^(.+)\.pdf$  /cgi-bin/pdf.php?file=$1 [L,NC,QSA]

Rewrite to www

RewriteCond %{REQUEST_URI} !^/(robots\.txt|favicon\.ico|sitemap\.xml)$
RewriteCond %{HTTP_HOST} !^www\.askapache\.com$ [NC]
RewriteRule ^(.*)$ http://www.askapache.com/$1 [R=301,L]

Rewrite to www dynamically

RewriteCond %{REQUEST_URI} !^/robots\.txt$ [NC]
RewriteCond %{HTTP_HOST} !^www\.[a-z-]+\.[a-z]{2,6} [NC]
RewriteCond %{HTTP_HOST} ([a-z-]+\.[a-z]{2,6})$   [NC]
RewriteRule ^(.*)$ http://www.%1/$1 [R=301,L]

301 Redirect Old File

Redirect 301 /old/file.html http://www.askapache.com/new/file.html

301 Redirect Entire Directory

RedirectMatch 301 /blog(.*) http://www.askapache.com/$1

Protecting your php.cgi

<FilesMatch "^php5?\.(ini|cgi)$">
Order Deny,Allow
Deny from All
Allow from env=REDIRECT_STATUS
</FilesMatch>

Set Cookie based on Request

This code sends the Set-Cookie header to create a cookie on the client with the value of the item matched by the 2nd set of parentheses.

RewriteEngine On
RewriteBase /
RewriteRule ^(.*)(de|es|fr|it|ja|ru|en)/$ - [co=lang:$2:.askapache.com:7200:/]

Set Cookie with env variable

Header set Set-Cookie "language=%{lang}e; path=/;" env=lang

Custom ErrorDocuments

ErrorDocument 100 /100_CONTINUE
ErrorDocument 101 /101_SWITCHING_PROTOCOLS
ErrorDocument 102 /102_PROCESSING
ErrorDocument 200 /200_OK
ErrorDocument 201 /201_CREATED
ErrorDocument 202 /202_ACCEPTED
ErrorDocument 203 /203_NON_AUTHORITATIVE
ErrorDocument 204 /204_NO_CONTENT
ErrorDocument 205 /205_RESET_CONTENT
ErrorDocument 206 /206_PARTIAL_CONTENT
ErrorDocument 207 /207_MULTI_STATUS
ErrorDocument 300 /300_MULTIPLE_CHOICES
ErrorDocument 301 /301_MOVED_PERMANENTLY
ErrorDocument 302 /302_MOVED_TEMPORARILY
ErrorDocument 303 /303_SEE_OTHER
ErrorDocument 304 /304_NOT_MODIFIED
ErrorDocument 305 /305_USE_PROXY
ErrorDocument 307 /307_TEMPORARY_REDIRECT
ErrorDocument 400 /400_BAD_REQUEST
ErrorDocument 401 /401_UNAUTHORIZED
ErrorDocument 402 /402_PAYMENT_REQUIRED
ErrorDocument 403 /403_FORBIDDEN
ErrorDocument 404 /404_NOT_FOUND
 
ErrorDocument 405 /405_METHOD_NOT_ALLOWED
ErrorDocument 406 /406_NOT_ACCEPTABLE
ErrorDocument 407 /407_PROXY_AUTHENTICATION_REQUIRED
ErrorDocument 408 /408_REQUEST_TIME_OUT
ErrorDocument 409 /409_CONFLICT
ErrorDocument 410 /410_GONE
ErrorDocument 411 /411_LENGTH_REQUIRED
ErrorDocument 412 /412_PRECONDITION_FAILED
ErrorDocument 413 /413_REQUEST_ENTITY_TOO_LARGE
ErrorDocument 414 /414_REQUEST_URI_TOO_LARGE
ErrorDocument 415 /415_UNSUPPORTED_MEDIA_TYPE
ErrorDocument 416 /416_RANGE_NOT_SATISFIABLE
ErrorDocument 417 /417_EXPECTATION_FAILED
ErrorDocument 422 /422_UNPROCESSABLE_ENTITY
ErrorDocument 423 /423_LOCKED
ErrorDocument 424 /424_FAILED_DEPENDENCY
ErrorDocument 426 /426_UPGRADE_REQUIRED
ErrorDocument 500 /500_INTERNAL_SERVER_ERROR
ErrorDocument 501 /501_NOT_IMPLEMENTED
ErrorDocument 502 /502_BAD_GATEWAY
ErrorDocument 503 /503_SERVICE_UNAVAILABLE
ErrorDocument 504 /504_GATEWAY_TIME_OUT
ErrorDocument 505 /505_VERSION_NOT_SUPPORTED
ErrorDocument 506 /506_VARIANT_ALSO_VARIES
ErrorDocument 507 /507_INSUFFICIENT_STORAGE
ErrorDocument 510 /510_NOT_EXTENDED

Implementing a Caching Scheme with .htaccess

# year
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|swf|mp3|mp4)$">
Header set Cache-Control "public"
Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT"
Header unset Last-Modified
</FilesMatch>
#2 hours
<FilesMatch "\.(html|htm|xml|txt|xsl)$">
Header set Cache-Control "max-age=7200, must-revalidate"
</FilesMatch>
<FilesMatch "\.(js|css)$">
SetOutputFilter DEFLATE
Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT"
</FilesMatch>

Password Protect single file

<Files login.php>
AuthName "Prompt"
AuthType Basic
AuthUserFile /home/askapache.com/.htpasswd
Require valid-user
</Files>

Password Protect multiple files

<FilesMatch "^(private|phpinfo).*$">
AuthName "Development"
AuthUserFile /.htpasswd
AuthType basic
Require valid-user
</FilesMatch>

Send Custom Headers

Header set P3P "policyref=\"http://www.askapache.com/w3c/p3p.xml\""
Header set X-Pingback "http://www.askapache.com/xmlrpc.php"
Header set Content-Language "en-US"
Header set Vary "Accept-Encoding"

Blocking based on User-Agent Header

SetEnvIfNoCase ^User-Agent$ .*(craftbot|download|extract|stripper|sucker|ninja|clshttp|webspider|leacher|collector|grabber|webpictures) HTTP_SAFE_BADBOT
SetEnvIfNoCase ^User-Agent$ .*(libwww-perl|aesop_com_spiderman) HTTP_SAFE_BADBOT
Deny from env=HTTP_SAFE_BADBOT

Blocking with RewriteCond

RewriteCond %{HTTP_USER_AGENT} ^.*(craftbot|download|extract|stripper|sucker|ninja|clshttp|webspider|leacher|collector|grabber|webpictures).*$ [NC]
RewriteRule . - [F,L]

.htaccess for mod_php

SetEnv PHPRC /location/todir/containing/phpinifile

.htaccess for php as cgi

AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php5.cgi

Shell wrapper for custom php.ini

#!/bin/sh
# Number of PHP children to spawn (used when running under FastCGI)
export PHP_FCGI_CHILDREN=3
# -c points the PHP CGI binary at a custom php.ini
exec php5.cgi -c /abs/php5/php.ini

Add values from HTTP Headers

SetEnvIfNoCase ^If-Modified-Since$ "(.+)" HTTP_IF_MODIFIED_SINCE=$1
SetEnvIfNoCase ^If-None-Match$ "(.+)" HTTP_IF_NONE_MATCH=$1
SetEnvIfNoCase ^Cache-Control$ "(.+)" HTTP_CACHE_CONTROL=$1
SetEnvIfNoCase ^Connection$ "(.+)" HTTP_CONNECTION=$1
SetEnvIfNoCase ^Keep-Alive$ "(.+)" HTTP_KEEP_ALIVE=$1
SetEnvIfNoCase ^Authorization$ "(.+)" HTTP_AUTHORIZATION=$1
SetEnvIfNoCase ^Cookie$ "(.+)" HTTP_MY_COOKIE=$1

Stop hotlinking

RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?askapache\.com/.*$ [NC]
RewriteRule \.(gif|jpg|swf|flv|png)$ http://www.askapache.com/feed.gif [R=302,L]

Turn logging off for IP

SecFilterSelective REMOTE_ADDR "208\.113\.183\.103" "nolog,noauditlog,pass"

Turn logging on for IP

SecFilterSelective REMOTE_ADDR "!^208\.113\.183\.103" "nolog,noauditlog,pass"
SecFilterSelective REMOTE_ADDR "208\.113\.183\.103" "log,auditlog,pass"

Example .htaccess Files

Here are some samples and examples taken from different .htaccess files I've used over the years. Specific solutions are farther down on this page and throughout the site.

# Set the Time Zone of your Server
SetEnv TZ America/Indianapolis
 
# ServerAdmin:  This address appears on some server-generated pages, such as error documents.
SetEnv SERVER_ADMIN webmaster@askapache.com
 
# Possible values for the Options directive are "None", "All", or any combination of:
#  Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
Options -ExecCGI -MultiViews -Includes -Indexes +FollowSymLinks
 
# DirectoryIndex: sets the file that Apache will serve if a directory is requested.
DirectoryIndex index.html index.php /index.php
 
# Action lets you define media types that will execute a script whenever
# a matching file is called. This eliminates the need for repeated URL
# pathnames for oft-used CGI file processors.
# Format: Action media/type /cgi-script/location
# Format: Action handler-name /cgi-script/location
#
Action php5-cgi /bin/php.cgi
 
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
AddHandler php-cgi .php .inc
 
# Commonly used filename extensions to character sets.
AddDefaultCharset UTF-8
 
# AddType allows you to add to or override the MIME configuration
AddType 'application/rdf+xml; charset=UTF-8' .rdf
AddType 'application/xhtml+xml; charset=UTF-8' .xhtml
AddType 'application/xhtml+xml; charset=UTF-8' .xhtml.gz
AddType 'text/html; charset=UTF-8' .html
AddType 'text/html; charset=UTF-8' .html.gz
AddType application/octet-stream .rar .chm .bz2 .tgz .msi .pdf .exe
AddType application/vnd.ms-excel .csv
AddType application/x-httpd-php-source .phps
AddType application/x-pilot .prc .pdb
AddType application/x-shockwave-flash .swf
AddType application/xrds+xml .xrdf
AddType text/plain .ini .sh .bsh .bash .awk .nawk .gawk .csh .var .c .in .h .asc .md5 .sha .sha1
AddType video/x-flv .flv
 
# AddEncoding allows you to have certain browsers uncompress information on the fly. Note: Not all browsers support this.
AddEncoding x-compress .Z
AddEncoding x-gzip .gz .tgz
 
# DefaultType: the default MIME type the server will use for a document.
DefaultType text/html
 
# Optionally add a line containing the server version and virtual host
# name to server-generated pages (internal error documents, FTP directory
# listings, mod_status and mod_info output etc., but not CGI generated
# documents or custom error documents).
# Set to "EMail" to also include a mailto: link to the ServerAdmin.
# Set to one of:  On | Off | EMail
ServerSignature Off
## MAIN DEFAULTS
Options +ExecCGI -Indexes
DirectoryIndex index.html index.htm index.php
DefaultLanguage en-US
AddDefaultCharset UTF-8
ServerSignature Off
 
## ENVIRONMENT VARIABLES
SetEnv PHPRC /webroot/includes
SetEnv TZ America/Indianapolis
 
SetEnv SERVER_ADMIN webmaster@askapache.com
 
## MIME TYPES
AddType video/x-flv .flv
AddType application/x-shockwave-flash .swf
AddType image/x-icon .ico
 
## FORCE FILE TO DOWNLOAD INSTEAD OF APPEAR IN BROWSER
# http://www.htaccesselite.com/addtype-addhandler-action-vf6.html
AddType application/octet-stream .mov .mp3 .zip
 
## ERRORDOCUMENTS
# http://askapache.com/htaccess/apache-status-code-headers-errordocument.html
ErrorDocument 400 /e400/
ErrorDocument 401 /e401/
ErrorDocument 402 /e402/
ErrorDocument 403 /e403/
ErrorDocument 404 /e404/
 
# Handlers can be builtin, included in a module, or added with the Action directive
# default-handler: default, handles static content (core)
#   send-as-is: Send file with HTTP headers (mod_asis)
#   cgi-script: treat file as CGI script (mod_cgi)
#    imap-file: Parse as an imagemap rule file (mod_imap)
#   server-info: Get server config info (mod_info)
#  server-status: Get server status report (mod_status)
#    type-map: type map file for content negotiation (mod_negotiation)
#  fastcgi-script: treat file as fastcgi script (mod_fastcgi)
#
# http://www.askapache.com/php/custom-phpini-tips-and-tricks.html
 
## PARSE AS CGI
AddHandler cgi-script .cgi .pl .spl
 
## RUN PHP AS APACHE MODULE
AddHandler application/x-httpd-php .php .htm
 
## RUN PHP AS CGI
AddHandler php-cgi .php .htm
 
## CGI PHP WRAPPER FOR CUSTOM PHP.INI
AddHandler phpini-cgi .php .htm
Action phpini-cgi /cgi-bin/php5-custom-ini.cgi
 
## FAST-CGI SETUP WITH PHP-CGI WRAPPER FOR CUSTOM PHP.INI
AddHandler fastcgi-script .fcgi
AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php5-wrapper.fcgi
 
## CUSTOM PHP CGI BINARY SETUP
AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php.cgi
 
## PROCESS SPECIFIC FILETYPES WITH CGI-SCRIPT
Action image/gif /cgi-bin/img-create.cgi
 
## CREATE CUSTOM HANDLER FOR SPECIFIC FILE EXTENSIONS
AddHandler custom-processor .ssp
Action custom-processor /cgi-bin/myprocessor.cgi
 
### HEADER CACHING
# http://www.askapache.com/htaccess/speed-up-sites-with-htaccess-caching.html
<FilesMatch "\.(flv|gif|jpg|jpeg|png|ico)$">
Header set Cache-Control "max-age=2592000"
</FilesMatch>
<FilesMatch "\.(js|css|pdf|swf)$">
Header set Cache-Control "max-age=604800"
</FilesMatch>
<FilesMatch "\.(html|htm|txt)$">
Header set Cache-Control "max-age=600"
</FilesMatch>
<FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
Header unset Cache-Control
</FilesMatch>
 
## ALTERNATE EXPIRES CACHING
# htaccesselite.com/d/use-htaccess-to-speed-up-your-site-discussion-vt67.html
ExpiresActive On
ExpiresDefault A604800
ExpiresByType image/x-icon A2592000
ExpiresByType application/x-javascript A2592000
ExpiresByType text/css A2592000
ExpiresByType text/html A300
 
<FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
ExpiresActive Off
</FilesMatch>
 
## META HTTP-EQUIV REPLACEMENTS
<FilesMatch "\.(html|htm|php)$">
Header set imagetoolbar "no"
</FilesMatch>

Here are some default MOD_REWRITE code examples.

## REWRITE DEFAULTS
RewriteEngine On
RewriteBase /
 
## REQUIRE SUBDOMAIN
RewriteCond %{HTTP_HOST} !^$
RewriteCond %{HTTP_HOST} !^subdomain\.askapache\.com$ [NC]
RewriteRule ^(.*)$ http://subdomain.askapache.com/$1 [L,R=301]
 
## SEO REWRITES
RewriteRule ^(.*)/ve/(.*)$ $1/voluntary-employee/$2 [L,R=301]
RewriteRule ^(.*)/hsa/(.*)$ $1/health-saving-account/$2 [L,R=301]
 
## WORDPRESS
# request is not an existing file
RewriteCond %{REQUEST_FILENAME} !-f
# request is not an existing directory
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
 
## ALTERNATIVE ANTI-HOTLINKING
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(subdomain\.)?askapache\.com/.*$ [NC]
RewriteRule ^.*\.(bmp|tif|gif|jpg|jpeg|jpe|png)$ - [F]
 
## REDIRECT HOTLINKERS
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(subdomain\.)?askapache\.com/.*$ [NC]
RewriteRule ^.*\.(bmp|tif|gif|jpg|jpeg|jpe|png)$ http://google.com [R]
 
## DENY REQUEST BASED ON REQUEST METHOD
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK|OPTIONS|HEAD)$ [NC]
RewriteRule ^.*$ - [F]
 
## REDIRECT UPLOADS
RewriteCond %{REQUEST_METHOD} ^(PUT|POST)$ [NC]
RewriteRule ^(.*)$ /cgi-bin/form-upload-processor.cgi?p=$1 [L,QSA]
 
## REQUIRE SSL EVEN WHEN MOD_SSL IS NOT LOADED
RewriteCond %{HTTPS} !=on [NC]
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
 
### ALTERNATIVE TO USING ERRORDOCUMENT
# http://www.htaccesselite.com/d/htaccess-errordocument-examples-vt11.html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^.*$ /error.php [L]
 
## SEO REDIRECTS
Redirect 301 /2006/oldfile.html http://subdomain.askapache.com/newfile.html
RedirectMatch 301 /o/(.*)$ http://subdomain.askapache.com/s/dl/$1

Examples of protecting your files and securing with password protection.

#
# Require (user|group|valid-user) (username|groupname)
#
## BASIC PASSWORD PROTECTION
AuthType basic
AuthName "prompt"
AuthUserFile /.htpasswd
AuthGroupFile /dev/null
Require valid-user
 
## ALLOW FROM IP OR VALID PASSWORD
Require valid-user
Order Deny,Allow
Deny from all
Allow from 192.168.1.23
Satisfy Any
 
## PROTECT FILES
<FilesMatch "\.(htaccess|htpasswd|ini|phps|fla|psd|log|sh)$">
Order Allow,Deny
Deny from all
</FilesMatch>
 
## PREVENT HOTLINKING
SetEnvIfNoCase Referer "^http://subdomain.askapache.com/" good
SetEnvIfNoCase Referer "^$" good
<FilesMatch "\.(png|jpg|jpeg|gif|bmp|swf|flv)$">
Order Deny,Allow
Deny from all
Allow from env=good
# Choose one 403 response; a later ErrorDocument 403 overrides an earlier one
ErrorDocument 403 http://www.google.com/intl/en_ALL/images/logo.gif
# ErrorDocument 403 /images/you_bad_hotlinker.gif
</FilesMatch>
 
## LIMIT UPLOAD FILE SIZE TO PROTECT AGAINST DOS ATTACK
#bytes, 0-2147483647(2GB)
LimitRequestBody 10240000
 
## MOST SECURE WAY TO REQUIRE SSL
# http://www.askapache.com/htaccess/apache-ssl-in-htaccess-examples.html
SSLOptions +StrictRequire
SSLRequireSSL
SSLRequire %{HTTP_HOST} eq "askapache.com"
ErrorDocument 403 https://askapache.com
 
## COMBINED DEVELOPER HTACCESS CODE-USE THIS
<FilesMatch "\.(flv|gif|jpg|jpeg|png|ico|js|css|pdf|swf|html|htm|txt)$">
Header set Cache-Control "max-age=5"
</FilesMatch>
AuthType basic
AuthName "Ooops! Temporarily Under Construction..."
AuthUserFile /.htpasswd
AuthGroupFile /dev/null
# Password prompt for everyone else
Require valid-user
Order Deny,Allow
Deny from all
# Your (the developer's) IP address
Allow from 192.168.64.5
# css/xhtml checks: jigsaw.w3.org/css-validator/
Allow from w3.org
# Allows Google to crawl your pages
Allow from googlebot.com
# No password required if the host/IP is allowed
Satisfy Any
 
## DONT HAVE TO EMPTY CACHE OR RELOAD TO SEE CHANGES
# If using mod_expires
ExpiresDefault A5
<FilesMatch "\.(flv|gif|jpg|jpeg|png|ico|js|css|pdf|swf|html|htm|txt)$">
Header set Cache-Control "max-age=5"
</FilesMatch>
 
## ALLOW ACCESS WITH PASSWORD OR NO PASSWORD FOR SPECIFIC IP/HOSTS
AuthType basic
AuthName "Ooops! Temporarily Under Construction..."
AuthUserFile /.htpasswd
AuthGroupFile /dev/null
# Password prompt for everyone else
Require valid-user
Order Deny,Allow
Deny from all
# Your (the developer's) IP address
Allow from 192.168.64.5
# css/xhtml checks: jigsaw.w3.org/css-validator/
Allow from w3.org
# Allows Google to crawl your pages
Allow from googlebot.com
# No password required if the host/IP is allowed
Satisfy Any

Advanced Mod_Rewrites

Here are some specific htaccess examples taken mostly from my WordPress Password Protection plugin, which does a lot more than password protection, as you will see from the following mod_rewrite examples. These are a few of the mod_rewrite uses that BlogSecurity declared pushed the boundaries of mod_rewrite! Some of these snippets are quite exotic and unlike anything you may have seen before; they are only for those who understand them, as they can kill a website pretty quickly.

Directory Protection

Enables DirectoryIndex protection, preventing directory index listings and setting a default index file.

Options -Indexes
DirectoryIndex index.html index.php /index.php

Password Protect wp-login.php

Requires a valid user/pass to access the login page. [401]

<Files wp-login.php>
Order Deny,Allow
Deny from All
Satisfy Any
AuthName "Protected By AskApache"
AuthUserFile /home/askapache.com/.htpasswda1
AuthType Basic
Require valid-user
</Files>
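The AuthUserFile above has to actually exist and contain entries in Apache's format. As a quick sketch (the path, username, and password here are placeholders, not the ones from the snippet), you can generate an entry in the Apache-specific APR1/MD5 format with openssl when the htpasswd utility isn't handy:

```shell
# Placeholder credentials; openssl's -apr1 emits the Apache-specific
# MD5 crypt format that Basic authentication understands.
hash=$(openssl passwd -apr1 secret)
printf '%s:%s\n' admin "$hash" > /tmp/.htpasswda1
cat /tmp/.htpasswda1
```

Running `htpasswd -c /path/to/.htpasswda1 admin` does the same job interactively.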

Password Protect wp-admin

Requires a valid user/pass to access any non-static (css, js, images) file in this directory. [401]

Options -ExecCGI -Indexes +FollowSymLinks -Includes
DirectoryIndex index.php /index.php
Order Deny,Allow
Deny from All
Satisfy Any
AuthName "Protected By AskApache"
AuthUserFile /home/askapache.com/.htpasswda1
AuthType Basic
Require valid-user
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|mp3|mpg|mp4|mov|wav|wmv|png|gif|swf|css|js)$">
Allow from All
</FilesMatch>
<FilesMatch "(async-upload)\.php$">
<IfModule mod_security.c>
SecFilterEngine Off
</IfModule>
Allow from All
</FilesMatch>

Protect wp-content

Denies any direct request for files ending in .php with a 403 Forbidden. May break plugins/themes. [403]

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /wp-content/.*$ [NC]
RewriteCond %{REQUEST_FILENAME} !^.+flexible-upload-wp25js.php$
RewriteCond %{REQUEST_FILENAME} ^.+\.(php|html|htm|txt)$
RewriteRule .* - [F,NS,L]
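Remember that %{THE_REQUEST} matches against the raw request line ("GET /path HTTP/1.1"), not the decoded URI. One way to sanity-check a pattern offline is to run a rough single-pattern approximation of the conditions above as an extended regex against fabricated request lines (note that grep -E is case-sensitive here, whereas [NC] makes Apache's test case-insensitive, and this sketch ignores the flexible-upload exclusion):

```shell
# Approximate net effect of the RewriteConds above; sample lines are made up.
pat='^[A-Z]{3,9} /wp-content/.*\.(php|html|htm|txt)'
printf '%s' 'GET /wp-content/evil.php HTTP/1.1' | grep -Eq "$pat" && echo blocked
printf '%s' 'GET /wp-content/logo.png HTTP/1.1' | grep -Eq "$pat" || echo allowed
```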

Protect wp-includes

Denies any direct request for files ending in .php with a 403 Forbidden. May break plugins/themes. [403]

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /wp-includes/.*$ [NC]
RewriteCond %{THE_REQUEST} !^[A-Z]{3,9}\ /wp-includes/js/.+/.+\ HTTP/ [NC]
RewriteCond %{REQUEST_FILENAME} ^.+\.php$
RewriteRule .* - [F,NS,L]

Common Exploits

Block common exploit requests with 403 Forbidden. These can help a lot, but may break some plugins. [403]

RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ ///.*\ HTTP/ [NC,OR]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*\?\=?(http|ftp|ssl|https):/.*\ HTTP/ [NC,OR]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*\?\?.*\ HTTP/ [NC,OR]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*\.(asp|ini|dll).*\ HTTP/ [NC,OR]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*\.(htpasswd|htaccess|aahtpasswd).*\ HTTP/ [NC]
RewriteRule .* - [F,NS,L]

Stop Hotlinking

Denies any request for static files (images, css, etc.) if the referrer is neither the local site nor empty. [403]

RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteCond %{HTTP_REFERER} !^http://www.askapache.com.*$ [NC]
RewriteRule \.(ico|pdf|flv|jpg|jpeg|mp3|mpg|mp4|mov|wav|wmv|png|gif|swf|css|js)$ - [F,NS,L]

Safe Request Methods

Denies any request not using GET, PROPFIND, POST, OPTIONS, PUT, or HEAD. [403]

RewriteCond %{REQUEST_METHOD} !^(GET|HEAD|POST|PROPFIND|OPTIONS|PUT)$ [NC]
RewriteRule .* - [F,NS,L]

Forbid Proxies

Denies any POST request made through a proxy server. Such visitors can still access the site, but not comment. See Perishable Press. [403]

RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{HTTP:VIA}%{HTTP:FORWARDED}%{HTTP:USERAGENT_VIA}%{HTTP:X_FORWARDED_FOR}%{HTTP:PROXY_CONNECTION} !^$ [OR]
RewriteCond %{HTTP:XPROXY_CONNECTION}%{HTTP:HTTP_PC_REMOTE_ADDR}%{HTTP:HTTP_CLIENT_IP} !^$
RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteRule .* - [F,NS,L]

Real wp-comments-post.php

Denies any POST attempt made to a non-existent wp-comments-post.php. [403]

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*/wp-comments-post\.php.*\ HTTP/ [NC]
RewriteRule .* - [F,NS,L]

HTTP PROTOCOL

Denies any badly formed HTTP protocol version in the request; 0.9, 1.0, and 1.1 only. [403]

RewriteCond %{THE_REQUEST} !^[A-Z]{3,9}\ .+\ HTTP/(0\.9|1\.0|1\.1) [NC]
RewriteRule .* - [F,NS,L]

SPECIFY CHARACTERS

Denies any request for a URL containing characters other than "a-zA-Z0-9.+_/-?=&". REALLY helps, but may break your site depending on your links. [403]

RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteCond %{THE_REQUEST} !^[A-Z]{3,9}\ [a-zA-Z0-9\.\+_/\-\?\=\&]+\ HTTP/ [NC]
RewriteRule .* - [F,NS,L]
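To see why this whitelist blocks so much junk, you can replay the character class as an extended regex against fabricated request lines (the second one mimics a SQL-injection probe; both samples are made up for illustration):

```shell
# The whitelist from the RewriteCond above, as an ERE.
ok='^[A-Z]{3,9} [a-zA-Z0-9.+_/?=&-]+ HTTP/'
printf '%s' 'GET /index.php?p=12&cat=3 HTTP/1.1' | grep -Eq "$ok" && echo allowed
printf '%s' "GET /page.php?q=' OR 1=1 HTTP/1.1" | grep -Eq "$ok" || echo blocked
```

The apostrophe and spaces in the second request line fall outside the character class, so the pattern never reaches the trailing " HTTP/" and the request would be forbidden.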

BAD Content Length

Denies any POST request that doesn't have a Content-Length header. [403]

RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{HTTP:Content-Length} ^$
RewriteCond %{REQUEST_URI} !^/(wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteRule .* - [F,NS,L]

BAD Content Type

Denies any POST request with a content type other than application/x-www-form-urlencoded or multipart/form-data. [403]

RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{HTTP:Content-Type} !^(application/x-www-form-urlencoded|multipart/form-data.*(boundary.*)?)$ [NC]
RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteRule .* - [F,NS,L]

Missing HTTP_HOST

Denies requests that don't contain an HTTP Host header. [403]

RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteCond %{HTTP_HOST} ^$
RewriteRule .* - [F,NS,L]

Bogus Graphics Exploit

Denies an obvious exploit using bogus graphics. [403]

RewriteCond %{HTTP:Content-Disposition} \.php [NC]
RewriteCond %{HTTP:Content-Type} image/.+ [NC]
RewriteRule .* - [F,NS,L]

No UserAgent, Not POST

Denies POST requests by blank user-agents. May prevent a small number of visitors from POSTING. [403]

RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{HTTP_USER_AGENT} ^-?$
RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteRule .* - [F,NS,L]

No Referer, No Comment

Denies any comment attempt with a blank HTTP_REFERER field, highly indicative of spam. May prevent some visitors from POSTING. [403]

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*/wp-comments-post\.php.*\ HTTP/ [NC]
RewriteCond %{HTTP_REFERER} ^-?$
RewriteRule .* - [F,NS,L]

Trackback Spam

Denies obvious trackback spam. See Holy Shmoly! [403]

RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{HTTP_USER_AGENT} ^.*(opera|mozilla|firefox|msie|safari).*$ [NC]
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.+/trackback/?\ HTTP/ [NC]
RewriteRule .* - [F,NS,L]

Map all URIs except those corresponding to existing files to a handler

RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule . /script.php
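The two RewriteConds are just filesystem tests: -d and -f correspond directly to the shell's directory and file tests. The routing decision can be sketched like this (the document root and URIs here are hypothetical stand-ins):

```shell
# Front-controller decision logic: existing files are served as-is,
# everything else is routed to the handler script.
DOCROOT=$(mktemp -d)            # stand-in for %{DOCUMENT_ROOT}
touch "$DOCROOT/real.txt"
for uri in /real.txt /made/up/page; do
  if [ ! -d "$DOCROOT$uri" ] && [ ! -f "$DOCROOT$uri" ]; then
    echo "$uri -> rewritten to /script.php"
  else
    echo "$uri -> served as-is"
  fi
done
```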

Map any request to a handler

In the case where all URIs should be sent to the same place (including potentially requests for static content), the method to use depends on the type of the handler. For handlers such as PHP scripts, use:

RewriteEngine On
RewriteCond %{REQUEST_URI} !=/script.php
RewriteRule .* /script.php

And for CGI scripts:

ScriptAliasMatch .* /var/www/script.cgi

Map URIs corresponding to existing files to a handler instead

RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d [OR]
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f
RewriteCond %{REQUEST_URI} !=/script.php
RewriteRule .* /script.php

If the existing files you wish to have handled by your script have a common set of file extensions distinct from that of the handler, you can bypass mod_rewrite and instead use mod_actions. Let's say you want all .html and .tpl files to be dealt with by your script:

Action foo-action /script.php
AddHandler foo-action html tpl

Deny access if var=val contains the string foo.

RewriteCond %{QUERY_STRING} foo
RewriteRule ^/url - [F]

Removing the Query String

RewriteRule ^/url /url?

Adding to the Query String

Keep the existing query string using the Query String Append flag, but add var=val to the end.

RewriteRule ^/url /url?var=val [QSA]
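To make the QSA ordering concrete: the substitution's own query string comes first, and the original request's query string is appended after it. Simulated with made-up values:

```shell
original='page=2'   # query string the client sent
added='var=val'     # query string in the RewriteRule substitution
echo "/url?${added}&${original}"
```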

Rewriting For Certain Query Strings

Rewrite URLs like http://askapache.com/url1?var=val to http://askapache.com/url2?var=val but don't rewrite if val isn't present.

RewriteCond %{QUERY_STRING} val
RewriteRule ^/url1 /url2

Modifying the Query String

Change any single instance of val in the query string to other_val when accessing /path. Note that %1 and %2 are back-references to the matched part of the regular expression in the previous RewriteCond.

RewriteCond %{QUERY_STRING} ^(.*)val(.*)$
RewriteRule /path /path?%1other_val%2
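The %1 and %2 references work like back-references in any regex tool, so the same transformation can be previewed with sed (the query string below is a made-up sample):

```shell
echo 'a=1&x=val&b=2' | sed -E 's/^(.*)val(.*)$/\1other_val\2/'
# -> a=1&x=other_val&b=2
```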

Best .htaccess Articles

.htaccess for Webmasters

Mod_Rewrite URL Rewriting

Undocumented techniques and methods will allow you to utilize mod_rewrite at an "expert level" by showing you how to unlock its secrets.

301 Redirects without mod_rewrite

Secure PHP with .htaccess

Locking down your php.ini and php cgi with .htaccess. If you have a php.cgi or php.ini file in your /cgi-bin/ directory or other public directory, try requesting them from your web browser. If your php.ini shows up, or worse, you are able to execute your php cgi, you'll need to secure it ASAP. This shows several ways to secure these files, and other interpreters like perl, fastCGI, bash, csh, etc.

.htaccess Cookie Manipulation

Cookie Manipulation in .htaccess with RewriteRule. Fresh .htaccess code for you! Check out the Cookie Manipulation and environment variable usage with mod_rewrite! I also included a couple of Mod_Security .htaccess examples. Enjoy!

.htaccess Caching

Password Protection and Authentication

Control HTTP Headers

Blocking Spam and bad Bots

Block Bad Robot. Want to block a bad robot or web scraper using .htaccess files? Here are 2 methods that illustrate blocking 436 various user-agents. You can block them using either SetEnvIf methods, or by using Rewrite Blocks.

PHP htaccess tips

By using some cool .htaccess tricks we can control whether PHP is run as a cgi or a module. If PHP is run as a cgi then we need to compile it ourselves, or use .htaccess to force PHP to use a local php.ini file. If it is running as a module then we can use various directives supplied by that module in .htaccess.

HTTP to HTTPS Redirects with mod_rewrite

This is freaking sweet if you use SSL, I promise you! Basically, instead of having to check for HTTPS using a RewriteCond %{HTTPS} =on for every redirect that can be either HTTP or HTTPS, I set an environment variable once with the value "http" or "https" depending on which is being used for that request, and use that env variable in the RewriteRule.

SSL in .htaccess

SetEnvIf and SetEnvIfNoCase in .htaccess

Site Security with .htaccess

chmod .htpasswd files 640, chmod .htaccess files 644, PHP files 600, and chmod files that you really don't want people to see 400. (NEVER chmod 777; try 766.)
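As a sketch on a throwaway file (mktemp here is a stand-in for your real .htpasswd), 640 gives the owner read/write, the group read, and the world nothing:

```shell
f=$(mktemp)               # placeholder for /path/to/.htpasswd
chmod 640 "$f"
ls -l "$f" | cut -c1-10   # -> -rw-r-----
```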

Merging Notes

The order of merging is:

  1. <Directory> (except regular expressions) and .htaccess done simultaneously (with .htaccess, if allowed, overriding <Directory>)
  2. <DirectoryMatch> (and <Directory ~>)
  3. <Files> and <FilesMatch> done simultaneously
  4. <Location> and <LocationMatch> done simultaneously

My Favorite .htaccess Links

These are just some of my favorite .htaccess resources. I'm really into doing your own hacking to get knowledge and these links are all great resources in that respect. I'm really interested in new or unusual htaccess solutions or htaccess hacks using .htaccess files, so let me know if you find one.

NCSA HTTPd Tutorials

Robert Hansen
Here's a great Hardening HTAccess part 1, part 2, part 3 article that goes into detail about some of the rarer security applications for .htaccess files.

SAMAXES
Some very detailed and helpful .htaccess articles, such as the ".htaccess - gzip and cache your site for faster loading and bandwidth saving."

PerishablePress
Stupid .htaccess tricks is probably the best explanation online for many of the best .htaccess solutions, including many from this page. Unlike me, they are fantastic writers; even for technical stuff they are very readable, so it's a good blog to kick back on and read. They also have a fantastic article detailing how to block/deny specific requests using mod_rewrite.

BlogSecurity
Mostly a site for... blog security (which is really any web-app security), this blog has a few really impressive articles full of solid information for Hardening WordPress with .htaccess, among more advanced topics that can be challenging but effective. This is a good site whose feed is worth subscribing to; they publish plugin exploits and WordPress core vulnerabilities quite a bit.

Check-These
Oldschool security/unix dude with some incredibly detailed mod_rewrite tutorials, helped me the most when I first got into this, and a great guy too. See: Basic Mod_Rewrite Guide, and Advanced Mod_Rewrite Tutorial

Reaper-X
A lot of .htaccess tutorials and code. See: Hardening WordPress with Mod Rewrite and htaccess

jdMorgan
jdMorgan is the Moderator of the Apache Forum at WebmasterWorld, a great place for answers. In my experience he can answer any tough question pertaining to advanced .htaccess usage, haven't seen him stumped yet.

The W3C
Setting Charset in .htaccess is very informative.

Holy Shmoly!
A great blogger with analysis of attacks and spam. See: More ways to stop spammers and unwanted traffic.

Apache Week
A partnership with Red Hat back in the 90's that produced some excellent documentation.

Corz
Here's a resource that I consider to have some of the most creative and ingenious ideas for .htaccess files, although the author is somewhat of a character ;) It's a trip trying to navigate around the site, a fun trip. It's like nothing I've ever seen. There are only a few articles on the site, but the htaccess articles are very original and well worth a look. See: htaccess tricks and tips.


Htaccess Directives

This is an AskApache.com exclusive; you won't find this anywhere else.

Directory, DirectoryMatch, Files, FilesMatch, IfDefine, IfVersion, IfModule, Limit, LimitExcept, Location, LocationMatch, Proxy, ProxyMatch, VirtualHost, AcceptMutex, AcceptPathInfo, AccessFileName, Action, AddCharset, AddDefaultCharset, AddDescription, AddEncoding, AddHandler, AddInputFilter, AddLanguage, AddOutputFilter, AddOutputFilterByType, AddType, Alias, AliasMatch, AllowCONNECT, AllowOverride, Anonymous, Anonymous_Authoritative, Anonymous_LogEmail, Anonymous_MustGiveEmail, Anonymous_NoUserId, Anonymous_VerifyEmail, AuthAuthoritative, AuthDBMAuthoritative, AuthDBMGroupFile, AuthDBMType, AuthDBMUserFile, AuthDigestAlgorithm, AuthDigestDomain, AuthDigestFile, AuthDigestGroupFile, AuthDigestNcCheck, AuthDigestNonceFormat, AuthDigestNonceLifetime, AuthDigestQop, AuthDigestShmemSize, AuthGroupFile, AuthName, AuthType, AuthUserFile, BS2000Account, BrowserMatch, BrowserMatchNoCase, CacheNegotiatedDocs, CharsetDefault, CharsetOptions, CharsetSourceEnc, CheckSpelling, ContentDigest, CookieDomain, CookieExpires, CookieName, CookieStyle, CookieTracking, CoreDumpDirectory, DAV, DAVDepthInfinity, DAVMinTimeout, DefaultIcon, DefaultLanguage, DefaultType, DocumentRoot, ErrorDocument, ErrorLog, ExtFilterDefine, ExtFilterOptions, FancyIndexing, FileETag, ForceLanguagePriority, ForceType, GprofDir, Header, HeaderName, HostnameLookups, IdentityCheck, ImapBase, ImapDefault, ImapMenu, Include, IndexIgnore, LanguagePriority, LimitRequestBody, LimitRequestFields, LimitRequestFieldsize, LimitRequestLine, LimitXMLRequestBody, LockFile, LogLevel, MaxRequestsPerChild, MultiviewsMatch, NameVirtualHost, NoProxy, Options, PassEnv, PidFile, Port, ProxyBlock, ProxyDomain, ProxyErrorOverride, ProxyIOBufferSize, ProxyMaxForwards, ProxyPass, ProxyPassReverse, ProxyPreserveHost, ProxyReceiveBufferSize, ProxyRemote, ProxyRemoteMatch, ProxyRequests, ProxyTimeout, ProxyVia, RLimitCPU, RLimitMEM, RLimitNPROC, ReadmeName, Redirect, RedirectMatch, RedirectPermanent, RedirectTemp, RemoveCharset, 
RemoveEncoding, RemoveHandler, RemoveInputFilter, RemoveLanguage, RemoveOutputFilter, RemoveType, RequestHeader, Require, RewriteCond, RewriteRule, SSIEndTag, SSIErrorMsg, SSIStartTag, SSITimeFormat, SSIUndefinedEcho, Satisfy, ScoreBoardFile, Script, ScriptAlias, ScriptAliasMatch, ScriptInterpreterSource, ServerAdmin, ServerAlias, ServerName, ServerPath, ServerRoot, ServerSignature, ServerTokens, SetEnv, SetEnvIf, SetEnvIfNoCase, SetHandler, SetInputFilter, SetOutputFilter, Timeout, TypesConfig, UnsetEnv, UseCanonicalName, XBitHack, allow, deny, order, CGIMapExtension, EnableMMAP, ISAPIAppendLogToErrors, ISAPIAppendLogToQuery, ISAPICacheFile, ISAPIFakeAsync, ISAPILogNotSupported, ISAPIReadAheadBuffer, SSLLog, SSLLogLevel, MaxMemFree, ModMimeUsePathInfo, EnableSendfile, ProxyBadHeader, AllowEncodedSlashes, LimitInternalRecursion, EnableExceptionHook, TraceEnable, ProxyFtpDirCharset, AuthBasicAuthoritative, AuthBasicProvider, AuthDefaultAuthoritative, AuthDigestProvider, AuthLDAPAuthzEnabled, AuthLDAPBindDN, AuthLDAPBindPassword, AuthLDAPCharsetConfig, AuthLDAPCompareDNOnServer, AuthLDAPDereferenceAliases, AuthLDAPGroupAttribute, AuthLDAPGroupAttributeIsDN, AuthLDAPRemoteUserIsDN, AuthLDAPURL, AuthzDBMAuthoritative, AuthzDBMType, AuthzDefaultAuthoritative, AuthzGroupFileAuthoritative, AuthzLDAPAuthoritative, AuthzOwnerAuthoritative, AuthzUserAuthoritative, BalancerMember, DAVGenericLockDB, FilterChain, FilterDeclare, FilterProtocol, FilterProvider, FilterTrace, IdentityCheckTimeout, IndexStyleSheet, ProxyPassReverseCookieDomain, ProxyPassReverseCookiePath, ProxySet, ProxyStatus, ThreadStackSize, AcceptFilter, Protocol, AuthDBDUserPWQuery, AuthDBDUserRealmQuery, UseCanonicalPhysicalPort, CheckCaseOnly, AuthLDAPRemoteUserAttribute, ProxyPassMatch, SSIAccessEnable, Substitute, ProxyPassInterpolateEnv


Htaccess Modules

Here are most of the modules that come with Apache. Each one can have new commands that can be used in .htaccess file scopes.

mod_actions, mod_alias, mod_asis, mod_auth_basic, mod_auth_digest, mod_authn_anon, mod_authn_dbd, mod_authn_dbm, mod_authn_default, mod_authn_file, mod_authz_dbm, mod_authz_default, mod_authz_groupfile, mod_authz_host, mod_authz_owner, mod_authz_user, mod_autoindex, mod_cache, mod_cern_meta, mod_cgi, mod_dav, mod_dav_fs, mod_dbd, mod_deflate, mod_dir, mod_disk_cache, mod_dumpio, mod_env, mod_expires, mod_ext_filter, mod_file_cache, mod_filter, mod_headers, mod_ident, mod_imagemap, mod_include, mod_info, mod_log_config, mod_log_forensic, mod_logio, mod_mem_cache, mod_mime, mod_mime_magic, mod_negotiation, mod_proxy, mod_proxy_ajp, mod_proxy_balancer, mod_proxy_connect, mod_proxy_ftp, mod_proxy_http, mod_rewrite, mod_setenvif, mod_speling, mod_ssl, mod_status, mod_substitute, mod_unique_id, mod_userdir, mod_usertrack, mod_version, mod_vhost_alias


Htaccess Software

Apache HTTP Server comes with the following programs.

httpd
Apache hypertext transfer protocol server
apachectl
Apache HTTP server control interface
ab
Apache HTTP server benchmarking tool
apxs
APache eXtenSion tool
dbmmanage
Create and update user authentication files in DBM format for basic authentication
fcgistarter
Start a FastCGI program
htcacheclean
Clean up the disk cache
htdigest
Create and update user authentication files for digest authentication
htdbm
Manipulate DBM password databases.
htpasswd
Create and update user authentication files for basic authentication
httxt2dbm
Create dbm files for use with RewriteMap
logresolve
Resolve hostnames for IP-addresses in Apache logfiles
log_server_status
Periodically log the server's status
rotatelogs
Rotate Apache logs without having to kill the server
split-logfile
Split a multi-vhost logfile into per-host logfiles
suexec
Switch User For Exec

Technical Look at .htaccess

Source: Apache API notes

Per-directory configuration structures

Let's look at how all of this plays out in mod_mime.c, which defines the file typing handler that emulates the NCSA server's behavior of determining file types from suffixes. What we'll be looking at here is the code which implements the AddType and AddEncoding commands. These commands can appear in .htaccess files, so they must be handled in the module's private per-directory data, which in fact consists of two separate tables for MIME types and encoding information, and is declared as follows:

typedef struct {
    table *forced_types;      /* Additional AddTyped stuff */
    table *encoding_types;    /* Added with AddEncoding... */
} mime_dir_config;

When the server is reading a configuration file, or <Directory> section, which includes one of the MIME module's commands, it needs to create a mime_dir_config structure, so those commands have something to act on. It does this by invoking the function it finds in the module's `create per-dir config slot', with two arguments: the name of the directory to which this configuration information applies (or NULL for srm.conf), and a pointer to a resource pool in which the allocation should happen.

(If we are reading a .htaccess file, that resource pool is the per-request resource pool for the request; otherwise it is a resource pool which is used for configuration data, and cleared on restarts. Either way, it is important for the structure being created to vanish when the pool is cleared, by registering a cleanup on the pool if necessary).

For the MIME module, the per-dir config creation function just ap_pallocs the structure above, and creates a couple of tables to fill it. That looks like this:

void *create_mime_dir_config (pool *p, char *dummy)
{
    mime_dir_config *new =
        (mime_dir_config *) ap_palloc (p, sizeof(mime_dir_config));

    new->forced_types = ap_make_table (p, 4);
    new->encoding_types = ap_make_table (p, 4);

    return new;
}

Now, suppose we've just read in a .htaccess file. We already have the per-directory configuration structure for the next directory up in the hierarchy. If the .htaccess file we just read in didn't have any AddType or AddEncoding commands, its per-directory config structure for the MIME module is still valid, and we can just use it. Otherwise, we need to merge the two structures somehow.

To do that, the server invokes the module's per-directory config merge function, if one is present. That function takes three arguments: the two structures being merged, and a resource pool in which to allocate the result. For the MIME module, all that needs to be done is overlay the tables from the new per-directory config structure with those from the parent:

void *merge_mime_dir_configs (pool *p, void *parent_dirv, void *subdirv)
{
    mime_dir_config *parent_dir = (mime_dir_config *) parent_dirv;
    mime_dir_config *subdir = (mime_dir_config *) subdirv;
    mime_dir_config *new =
        (mime_dir_config *) ap_palloc (p, sizeof(mime_dir_config));

    new->forced_types =
        ap_overlay_tables (p, subdir->forced_types, parent_dir->forced_types);
    new->encoding_types =
        ap_overlay_tables (p, subdir->encoding_types, parent_dir->encoding_types);

    return new;
}

As a note --- if there is no per-directory merge function present, the server will just use the subdirectory's configuration info, and ignore the parent's. For some modules, that works just fine (e.g., for the includes module, whose per-directory configuration information consists solely of the state of the XBITHACK), and for those modules, you can just not declare one, and leave the corresponding structure slot in the module itself NULL.

Command handling

Now that we have these structures, we need to be able to figure out how to fill them. That involves processing the actual AddType and AddEncoding commands. To find commands, the server looks in the module's command table. That table contains information on how many arguments the commands take, and in what formats, where it is permitted, and so forth. That information is sufficient to allow the server to invoke most command-handling functions with pre-parsed arguments. Without further ado, let's look at the AddType command handler, which looks like this (the AddEncoding command looks basically the same, and won't be shown here):

char *add_type (cmd_parms *cmd, mime_dir_config *m, char *ct, char *ext)
{
    if (*ext == '.') ++ext;

    ap_table_set (m->forced_types, ext, ct);
    return NULL;
}

This command handler is unusually simple. As you can see, it takes four arguments, two of which are pre-parsed arguments, the third being the per-directory configuration structure for the module in question, and the fourth being a pointer to a cmd_parms structure. That structure contains a bunch of arguments which are frequently of use to some, but not all, commands, including a resource pool (from which memory can be allocated, and to which cleanups should be tied), and the (virtual) server being configured, from which the module's per-server configuration data can be obtained if required.

Another way in which this particular command handler is unusually simple is that there are no error conditions which it can encounter. If there were, it could return an error message instead of NULL; this causes an error to be printed out on the server's stderr, followed by a quick exit, if it is in the main config files; for a .htaccess file, the syntax error is logged in the server error log (along with an indication of where it came from), and the request is bounced with a server error response (HTTP error status, code 500).

The MIME module's command table has entries for these commands, which look like this:

command_rec mime_cmds[] = {
    { "AddType", add_type, NULL, OR_FILEINFO, TAKE2,
      "a mime type followed by a file extension" },
    { "AddEncoding", add_encoding, NULL, OR_FILEINFO, TAKE2,
      "an encoding (e.g., gzip), followed by a file extension" },
    { NULL }
};

Here's a taste of that famous Apache source code that builds the directives allowed in .htaccess file context. The key that tells whether a directive is enabled in .htaccess context is DIR_CMD_PERMS, and then OR_FILEINFO, which means the directive is enabled depending on the AllowOverride directive that is only allowed in the main config. First Apache 1.3.0, then Apache 2.2.10:

mod_autoindex
AddIcon, add_icon, BY_PATH, DIR_CMD_PERMS, an icon URL followed by one or more filenames
AddIconByType, add_icon, BY_TYPE, DIR_CMD_PERMS, an icon URL followed by one or more MIME types
AddIconByEncoding, add_icon, BY_ENCODING, DIR_CMD_PERMS, an icon URL followed by one or more content encodings
AddAlt, add_alt, BY_PATH, DIR_CMD_PERMS, alternate descriptive text followed by one or more filenames
AddAltByType, add_alt, BY_TYPE, DIR_CMD_PERMS, alternate descriptive text followed by one or more MIME types
AddAltByEncoding, add_alt, BY_ENCODING, DIR_CMD_PERMS, alternate descriptive text followed by one or more content encodings
IndexOptions, add_opts, DIR_CMD_PERMS, RAW_ARGS, one or more index options
IndexIgnore, add_ignore, DIR_CMD_PERMS, ITERATE, one or more file extensions
AddDescription, add_desc, BY_PATH, DIR_CMD_PERMS, Descriptive text followed by one or more filenames
HeaderName, add_header, DIR_CMD_PERMS, TAKE1, a filename
ReadmeName, add_readme, DIR_CMD_PERMS, TAKE1, a filename
FancyIndexing, fancy_indexing, DIR_CMD_PERMS, FLAG, Limited to 'on' or 'off' (superseded by IndexOptions FancyIndexing)
DefaultIcon, ap_set_string_slot, (void *) XtOffsetOf(autoindex_config_rec, default_icon), DIR_CMD_PERMS, TAKE1, an icon URL
mod_rewrite
// mod_rewrite
RewriteEngine, cmd_rewriteengine, OR_FILEINFO, On or Off to enable or disable (default)
RewriteOptions, cmd_rewriteoptions, OR_FILEINFO, List of option strings to set
RewriteBase, cmd_rewritebase, OR_FILEINFO, the base URL of the per-directory context
RewriteCond, cmd_rewritecond, OR_FILEINFO, an input string and a to be applied regexp-pattern
RewriteRule, cmd_rewriterule, OR_FILEINFO, an URL-applied regexp-pattern and a substitution URL
RewriteMap, cmd_rewritemap, RSRC_CONF, a mapname and a filename
RewriteLock, cmd_rewritelock, RSRC_CONF, the filename of a lockfile used for inter-process synchronization
RewriteLog, cmd_rewritelog, RSRC_CONF, the filename of the rewriting logfile
RewriteLogLevel, cmd_rewriteloglevel, RSRC_CONF, the level of the rewriting logfile verbosity (0=none, 1=std, .., 9=max)
RewriteLog, fake_rewritelog, RSRC_CONF, [DISABLED] the filename of the rewriting logfile
RewriteLogLevel, fake_rewritelog, RSRC_CONF, [DISABLED] the level of the rewriting logfile verbosity

The entries in these tables are:

  • The name of the command
  • The function which handles it
  • A (void *) pointer, which is passed in the cmd_parms structure to the command handler; this is useful in case many similar commands are handled by the same function.
  • A bit mask indicating where the command may appear. There are mask bits corresponding to each AllowOverride option, and an additional mask bit, RSRC_CONF, indicating that the command may appear in the server's own config files, but not in any .htaccess file.
  • A flag indicating how many arguments the command handler wants pre-parsed, and how they should be passed in. TAKE2 indicates two pre-parsed arguments. Other options are TAKE1, which indicates one pre-parsed argument, FLAG, which indicates that the argument should be On or Off, and is passed in as a boolean flag, RAW_ARGS, which causes the server to give the command the raw, unparsed arguments (everything but the command name itself). There is also ITERATE, which means that the handler looks the same as TAKE1, but that if multiple arguments are present, it should be called multiple times, and finally ITERATE2, which indicates that the command handler looks like a TAKE2, but if more arguments are present, then it should be called multiple times, holding the first argument constant.
  • Finally, we have a string which describes the arguments that should be present. If the arguments in the actual config file are not as required, this string will be used to help give a more specific error message. (You can safely leave this NULL).

Finally, having set this all up, we have to use it. This is ultimately done in the module's handlers, specifically its file-typing handler; note that the per-directory configuration structure is extracted from the request_rec's per-directory configuration vector by using the ap_get_module_config function.

Side notes --- per-server configuration, virtual servers, etc.

The basic ideas behind per-server module configuration are the same as those for per-directory configuration; there is a creation function and a merge function, the latter being invoked where a virtual server has partially overridden the base server configuration, and a combined structure must be computed. (As with per-directory configuration, if no merge function is specified and a module is configured in some virtual server, the default is that the base configuration is simply ignored.)

The only substantial difference is that when a command needs to configure the per-server private module data, it needs to go to the cmd_parms data to get at it. Here's an example, from the alias module, which also indicates how a syntax error can be returned (note that the per-directory configuration argument to the command handler is declared as a dummy, since the module doesn't actually have per-directory config data):


Litespeed Htaccess support

Unlike other lightweight web servers, LiteSpeed Web Server fully supports Apache-compatible per-directory configuration overrides. With .htaccess you can change the configuration for any directory under the document root on the fly, which in most cases is a mandatory feature in a shared hosting environment. It is worth noting that enabling .htaccess support in LiteSpeed Web Server will not degrade the server's performance, in contrast to Apache's roughly 40% drop in performance.

Continue Reading Page 2

THE Ultimate Htaccess… originally appeared on AskApache.com

The post THE Ultimate Htaccess appeared first on AskApache.


Bash Script to Create index.html of Dir Listing


AskApache.com

If you use Apache to auto-generate directory index listings of files/dirs using the IndexOptions directive, like at http://gnu.askapache.com/, and you have a large number of files and directories in the root directory and/or slow IO speed, then generating the index could take Apache over a minute!

Fix for index Listings

I fixed that by writing a bash shell script, run by cron, in this case immediately after every rsync. It creates a static index.html file formatted exactly like the simple Apache-generated one. Now Apache serves the single index.html, which can be cached, and the listing is served in under a second.

Htaccess for Indexes

Here is the basic .htaccess to setup in the root directory.

Options SymLinksIfOwnerMatch Indexes
 
DirectoryIndex index.html
IndexOptions FancyIndexing TrackModified IgnoreClient ScanHTMLTitles SuppressRules VersionSort IgnoreCase NameWidth=* DescriptionWidth=*

Shell Script to create index.html

The source is below, or download it: mirror index.html creator.

#!/bin/bash
# Updated: Wed Apr 10 21:04:12 2013 by webmaster@askapache
# @ http://uploads.askapache.com/2013/04/gnu-mirror-index-creator.txt
# Copyright (C) 2013 Free Software Foundation, Inc.
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
function create_gnu_index ()
{
    # call it right or die
    [[ $# != 3 ]] && echo "bad args. do: $FUNCNAME '/DOCUMENT_ROOT/' '/' 'gnu.askapache.com'" && exit 2
  
    # D is the doc_root containing the site
    local L= D="$1" SUBDIR="$2" DOMAIN="$3" F=
 
    # The index.html file to create
    F="${D}index.html"
 
    # if dir doesn't exist, create it
    [[ -d $D ]] || mkdir -p "$D";
 
    # cd into dir or die
    cd "$D" || exit 2;
 
    # touch index.html and check if writable or die
    touch "$F" && test -w "$F" || exit 2;
 
    # remove empty directories; they don't need to be there and slow things down if they are
    find . -maxdepth 1 -type d -empty -exec rm -rf {} \;
 
    # start of total output for saving as index.html
    (
 
        # print the html header
        echo '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">';
        echo "<html><head><title>Index of http://${DOMAIN}${SUBDIR}</title></head>";
        echo "<body><h1>Index of ${SUBDIR}</h1><pre>      Name                                        Last modified      Size";
 
        # start of content output
        (
            # change IFS locally within subshell so the for loop saves line correctly to L var
            IFS=$'\n';
 
            # pretty sweet, will mimic the normal apache output
            for L in $(find -L . -mount -depth -maxdepth 1 -type f ! -name 'index.html' -printf "      <a href=\"%f\">%-44f@_@%Td-%Tb-%TY %Tk:%TM  @%f@\n"|sort|sed 's,\([\ ]\+\)@_@,</a>\1,g');
            do
                # file
                F=$(sed -e 's,^.*@\([^@]\+\)@.*$,\1,g'<<<"$L");
 
                # human-readable size of that file
                F=$(du -bh "$F" | cut -f1);
 
                # output with correct format
                sed -e 's,\ @.*$, '"$F"',g'<<<"$L";
            done;
        )
 
        # now output a list of all directories in this dir (maxdepth 1) other than '.' outputting in a sorted manner exactly like apache
        find -L . -mount -depth -maxdepth 1 -type d ! -name '.' -printf "      <a href=\"%f\">%-43f@_@%Td-%Tb-%TY %Tk:%TM  -\n"|sort -d|sed 's,\([\ ]\+\)@_@,/</a>\1,g'
 
        # print the footer html
        echo "</pre><address>Apache Server at ${DOMAIN}</address></body></html>";
 
    # finally save the output of the subshell to index.html
    )  > $F;
 
}
 
# start the run ( use function so everything is local and contained )
#    $1 is absolute document_root with trailing '/'
#    $2 is subdir like '/subdir/' if thats the web root, '/' if no subdir
#    $3 is the domain 'subdomain.domain.tld'
create_gnu_index "${HOME}/sites/gnu.askapache.com/htdocs/" "/" "gnu.askapache.com"
 
# takes about 1-5 seconds to complete
exit
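
The heart of the script is the find -printf row, reshaped by sort and sed into Apache-style anchor lines. Here it is in isolation on a throwaway directory (the filenames are invented, and the script's -mount/-depth flags are dropped for brevity):

```shell
#!/bin/bash
# Demo of the find/sort/sed pipeline that fakes Apache's FancyIndexing rows.
set -e
D=$(mktemp -d)
cd "$D"
touch alpha.txt beta.txt

# %f is the filename, %-44f pads it to 44 chars; sed swaps the @_@ marker
# for </a>. The trailing @name@ token is what the full script later
# replaces with the file size.
find -L . -maxdepth 1 -type f ! -name 'index.html' \
    -printf "      <a href=\"%f\">%-44f@_@%Td-%Tb-%TY %Tk:%TM  @%f@\n" \
    | sort \
    | sed 's,\([\ ]\+\)@_@,</a>\1,g'

cd / && rm -rf "$D"
```

Each output line is already in the exact shape of an Apache fancy-index row, which is why the generated index.html is indistinguishable from the live listing.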

Bash Script to Create index.html of Dir Listing… originally appeared on AskApache.com

The post Bash Script to Create index.html of Dir Listing appeared first on AskApache.

Bash Functions and Aliases for Traps, Kills, and Signals


Download: trapstuff.txt

See also: signal.h

Bash Trap and Kill Aliases

alias traps='trap -l|sed "s,\t,\n,g;s,),:,g;s,SIG,,g;s, \([0-9]\),\1,g"'
alias kills='for h in $(builtin kill -l); do echo "$(builtin kill -l $h): $h"; done';

traps output

1: HUP
2: INT
3: QUIT
4: ILL
5: TRAP
6: ABRT
7: BUS
8: FPE
9: KILL
10: USR1
11: SEGV
12: USR2
13: PIPE
14: ALRM
15: TERM
16: STKFLT
17: CHLD
18: CONT
19: STOP
20: TSTP
21: TTIN
22: TTOU
23: URG
24: XCPU
25: XFSZ
26: VTALRM
27: PROF
28: WINCH
29: IO
30: PWR
31: SYS
34: RTMIN
35: RTMIN+1
36: RTMIN+2
37: RTMIN+3
38: RTMIN+4
39: RTMIN+5
40: RTMIN+6
41: RTMIN+7
42: RTMIN+8
43: RTMIN+9
44: RTMIN+10
45: RTMIN+11
46: RTMIN+12
47: RTMIN+13
48: RTMIN+14
49: RTMIN+15
50: RTMAX-14
51: RTMAX-13
52: RTMAX-12
53: RTMAX-11
54: RTMAX-10
55: RTMAX-9
56: RTMAX-8
57: RTMAX-7
58: RTMAX-6
59: RTMAX-5
60: RTMAX-4
61: RTMAX-3
62: RTMAX-2
63: RTMAX-1
64: RTMAX
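
Underneath both aliases is bash's builtin kill -l, which on its own maps a single signal in either direction:

```shell
#!/bin/bash
# bash's builtin kill -l converts between signal names and numbers,
# which is all the two aliases above are wrapping.
kill -l 9      # prints: KILL
kill -l KILL   # prints: 9
kill -l TERM   # prints: 15
```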

Trap and Kill Functions

Traps are helpful in functions like this:

function yn ()
{ 
  local post=`tput op;tput sgr0;tput cnorm`;
  trap 'echo -e "${post}"' 1 2 3 6 15 RETURN;
  (
    local a= YN=1 msg="${@}" pre=`tput civis;tput setab 4;tput setaf 7` post=`tput op;tput sgr0;tput cnorm`;
    trap 'echo -e "${post}"' 1 2 3 6 15 RETURN;
    until [[ $YN == 0 || $YN == 65 ]]; do
      echo -en "\n\n${pre}:: ${msg}? (y/N)${post} " && read -s -n 1 a;
      case $a in 
        [yY]) YN=0;
        ;;
        [nN]) YN=65;
        ;;
      esac;
      echo;
    done;
    return $YN
  )
}
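
The trap … RETURN used above fires when the function returns, which is what restores the terminal colors even if the prompt is interrupted. The mechanism in isolation (cleanup_demo is just a throwaway name):

```shell
#!/bin/bash
# A RETURN trap set inside a bash function runs when the function returns,
# which is how yn() restores the terminal state on any exit path.
cleanup_demo() {
    trap 'echo cleanup' RETURN
    echo body
}
cleanup_demo   # prints "body" then "cleanup"
```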

Info about a Signal number or name

function trapinfo () 
{ 
   local signum=${1:-} sigdesc="`trapdesc ${1:-}`";
   case $signum in 
    1 | HUP | SIGHUP) echo "${signum}: ${sigdesc}: HUP - SIGHUP"
     ;;
    2 | INT | SIGINT) echo "${signum}: ${sigdesc}: INT - SIGINT"
     ;;
    3 | QUIT | SIGQUIT) echo "${signum}: ${sigdesc}: QUIT - SIGQUIT"
     ;;
    4 | ILL | SIGILL) echo "${signum}: ${sigdesc}: ILL - SIGILL"
     ;;
    5 | TRAP | SIGTRAP) echo "${signum}: ${sigdesc}: TRAP - SIGTRAP"
     ;;
    6 | ABRT | SIGABRT) echo "${signum}: ${sigdesc}: ABRT - SIGABRT"
     ;;
    7 | BUS | SIGBUS) echo "${signum}: ${sigdesc}: BUS - SIGBUS"
     ;;
    8 | FPE | SIGFPE) echo "${signum}: ${sigdesc}: FPE - SIGFPE"
     ;;
    9 | KILL | SIGKILL) echo "${signum}: ${sigdesc}: KILL - SIGKILL"
     ;;
    10 | USR1 | SIGUSR1) echo "${signum}: ${sigdesc}: USR1 - SIGUSR1"
     ;;
    11 | SEGV | SIGSEGV) echo "${signum}: ${sigdesc}: SEGV - SIGSEGV"
     ;;
    12 | USR2 | SIGUSR2) echo "${signum}: ${sigdesc}: USR2 - SIGUSR2"
     ;;
    13 | PIPE | SIGPIPE) echo "${signum}: ${sigdesc}: PIPE - SIGPIPE"
     ;;
    14 | ALRM | SIGALRM) echo "${signum}: ${sigdesc}: ALRM - SIGALRM"
     ;;
    15 | TERM | SIGTERM) echo "${signum}: ${sigdesc}: TERM - SIGTERM"
     ;;
    16 | STKFLT | SIGSTKFLT) echo "${signum}: ${sigdesc}: STKFLT - SIGSTKFLT"
     ;;
    17 | CHLD | SIGCHLD) echo "${signum}: ${sigdesc}: CHLD - SIGCHLD"
     ;;
    18 | CONT | SIGCONT) echo "${signum}: ${sigdesc}: CONT - SIGCONT"
     ;;
    19 | STOP | SIGSTOP) echo "${signum}: ${sigdesc}: STOP - SIGSTOP"
     ;;
    20 | TSTP | SIGTSTP) echo "${signum}: ${sigdesc}: TSTP - SIGTSTP"
     ;;
    21 | TTIN | SIGTTIN) echo "${signum}: ${sigdesc}: TTIN - SIGTTIN"
     ;;
    22 | TTOU | SIGTTOU) echo "${signum}: ${sigdesc}: TTOU - SIGTTOU"
     ;;
    23 | URG | SIGURG) echo "${signum}: ${sigdesc}: URG - SIGURG"
     ;;
    24 | XCPU | SIGXCPU) echo "${signum}: ${sigdesc}: XCPU - SIGXCPU"
     ;;
    25 | XFSZ | SIGXFSZ) echo "${signum}: ${sigdesc}: XFSZ - SIGXFSZ"
     ;;
    26 | VTALRM | SIGVTALRM) echo "${signum}: ${sigdesc}: VTALRM - SIGVTALRM"
     ;;
    27 | PROF | SIGPROF) echo "${signum}: ${sigdesc}: PROF - SIGPROF"
     ;;
    28 | WINCH | SIGWINCH) echo "${signum}: ${sigdesc}: WINCH - SIGWINCH"
     ;;
    29 | IO | SIGIO) echo "${signum}: ${sigdesc}: IO - SIGIO"
     ;;
    30 | PWR | SIGPWR) echo "${signum}: ${sigdesc}: PWR - SIGPWR"
     ;;
    31 | SYS | SIGSYS) echo "${signum}: ${sigdesc}: SYS - SIGSYS"
     ;;
    34 | RTMIN | SIGRTMIN) echo "${signum}: ${sigdesc}: RTMIN - SIGRTMIN"
     ;;
    35 | RTMIN+1 | SIGRTMIN+1) echo "${signum}: ${sigdesc}: RTMIN+1 - SIGRTMIN+1"
     ;;
    36 | RTMIN+2 | SIGRTMIN+2) echo "${signum}: ${sigdesc}: RTMIN+2 - SIGRTMIN+2"
     ;;
    37 | RTMIN+3 | SIGRTMIN+3) echo "${signum}: ${sigdesc}: RTMIN+3 - SIGRTMIN+3"
     ;;
    38 | RTMIN+4 | SIGRTMIN+4) echo "${signum}: ${sigdesc}: RTMIN+4 - SIGRTMIN+4"
     ;;
    39 | RTMIN+5 | SIGRTMIN+5) echo "${signum}: ${sigdesc}: RTMIN+5 - SIGRTMIN+5"
     ;;
    40 | RTMIN+6 | SIGRTMIN+6) echo "${signum}: ${sigdesc}: RTMIN+6 - SIGRTMIN+6"
     ;;
    41 | RTMIN+7 | SIGRTMIN+7) echo "${signum}: ${sigdesc}: RTMIN+7 - SIGRTMIN+7"
     ;;
    42 | RTMIN+8 | SIGRTMIN+8) echo "${signum}: ${sigdesc}: RTMIN+8 - SIGRTMIN+8"
     ;;
    43 | RTMIN+9 | SIGRTMIN+9) echo "${signum}: ${sigdesc}: RTMIN+9 - SIGRTMIN+9"
     ;;
    44 | RTMIN+10 | SIGRTMIN+10) echo "${signum}: ${sigdesc}: RTMIN+10 - SIGRTMIN+10"
     ;;
    45 | RTMIN+11 | SIGRTMIN+11) echo "${signum}: ${sigdesc}: RTMIN+11 - SIGRTMIN+11"
     ;;
    46 | RTMIN+12 | SIGRTMIN+12) echo "${signum}: ${sigdesc}: RTMIN+12 - SIGRTMIN+12"
     ;;
    47 | RTMIN+13 | SIGRTMIN+13) echo "${signum}: ${sigdesc}: RTMIN+13 - SIGRTMIN+13"
     ;;
    48 | RTMIN+14 | SIGRTMIN+14) echo "${signum}: ${sigdesc}: RTMIN+14 - SIGRTMIN+14"
     ;;
    49 | RTMIN+15 | SIGRTMIN+15) echo "${signum}: ${sigdesc}: RTMIN+15 - SIGRTMIN+15"
     ;;
    50 | RTMAX-14 | SIGRTMAX-14) echo "${signum}: ${sigdesc}: RTMAX-14 - SIGRTMAX-14"
     ;;
    51 | RTMAX-13 | SIGRTMAX-13) echo "${signum}: ${sigdesc}: RTMAX-13 - SIGRTMAX-13"
     ;;
    52 | RTMAX-12 | SIGRTMAX-12) echo "${signum}: ${sigdesc}: RTMAX-12 - SIGRTMAX-12"
     ;;
    53 | RTMAX-11 | SIGRTMAX-11) echo "${signum}: ${sigdesc}: RTMAX-11 - SIGRTMAX-11"
     ;;
    54 | RTMAX-10 | SIGRTMAX-10) echo "${signum}: ${sigdesc}: RTMAX-10 - SIGRTMAX-10"
     ;;
    55 | RTMAX-9 | SIGRTMAX-9) echo "${signum}: ${sigdesc}: RTMAX-9 - SIGRTMAX-9"
     ;;
    56 | RTMAX-8 | SIGRTMAX-8) echo "${signum}: ${sigdesc}: RTMAX-8 - SIGRTMAX-8"
     ;;
    57 | RTMAX-7 | SIGRTMAX-7) echo "${signum}: ${sigdesc}: RTMAX-7 - SIGRTMAX-7"
     ;;
    58 | RTMAX-6 | SIGRTMAX-6) echo "${signum}: ${sigdesc}: RTMAX-6 - SIGRTMAX-6"
     ;;
    59 | RTMAX-5 | SIGRTMAX-5) echo "${signum}: ${sigdesc}: RTMAX-5 - SIGRTMAX-5"
     ;;
    60 | RTMAX-4 | SIGRTMAX-4) echo "${signum}: ${sigdesc}: RTMAX-4 - SIGRTMAX-4"
     ;;
    61 | RTMAX-3 | SIGRTMAX-3) echo "${signum}: ${sigdesc}: RTMAX-3 - SIGRTMAX-3"
     ;;
    62 | RTMAX-2 | SIGRTMAX-2) echo "${signum}: ${sigdesc}: RTMAX-2 - SIGRTMAX-2"
     ;;
    63 | RTMAX-1 | SIGRTMAX-1) echo "${signum}: ${sigdesc}: RTMAX-1 - SIGRTMAX-1"
     ;;
    64 | RTMAX | SIGRTMAX) echo "${signum}: ${sigdesc}: RTMAX - SIGRTMAX"
     ;;
    65) echo "${signum}: ${sigdesc}: invalid signal specification"
     ;;
    *) echo "${signum}: ${sigdesc}: Unknown Signal"
     ;;
   esac
}

Function to describe the signal

function trapdesc () 
{ 
   case $1 in 
    0) echo "Bogus signal"
     ;;
    1) echo "Hangup"
     ;;
    2) echo "Interrupt"
     ;;
    3) echo "Quit"
     ;;
    4) echo "Illegal instruction"
     ;;
    5) echo "BPT trace/trap"
     ;;
    6) echo "ABORT instruction"
     ;;
    7) echo "Bus error"
     ;;
    8) echo "Floating point exception"
     ;;
    9) echo "Killed"
     ;;
    10) echo "User signal 1"
     ;;
    11) echo "Segmentation fault"
     ;;
    12) echo "User signal 2"
     ;;
    13) echo "Broken pipe"
     ;;
    14) echo "Alarm clock"
     ;;
    15) echo "Terminated"
     ;;
    16) echo "Stack fault"
     ;;
    17) echo "Child death or stop"
     ;;
    18) echo "Continue"
     ;;
    19) echo "Stopped (signal)"
     ;;
    20) echo "Stopped"
     ;;
    21) echo "Stopped (tty input)"
     ;;
    22) echo "Stopped (tty output)"
     ;;
    23) echo "Urgent IO condition"
     ;;
    24) echo "CPU limit"
     ;;
    25) echo "File limit"
     ;;
    26) echo "Alarm (virtual)"
     ;;
    27) echo "Alarm (profile)"
     ;;
    28) echo "Window changed"
     ;;
    29) echo "I/O ready"
     ;;
    30) echo "Power failure imminent"
     ;;
    31) echo "Bad system call"
     ;;
    *) echo "Unknown"
     ;;
   esac
}

Bash Functions and Aliases for Traps, Kills, and Signals… originally appeared on AskApache.com

The post Bash Functions and Aliases for Traps, Kills, and Signals appeared first on AskApache.

PHP fsockopen for FAST DNS lookups over UDP


While reading up on gethostbyaddr on PHP.net, I saw a nice idea for using fsockopen to connect over UDP port 53 to any Public DNS server, like Google's 8.8.8.8, and sending the reverse addr lookup in oh about 100 bytes, then getting the response in oh about 150 bytes! All in less than a second. This would be extremely valuable for use in things like my online header tool because it's faster than any other method. As usual, I went a bit overboard optimizing it to be lean and fast.

It's also a fairly decent example of how to use fsockopen in general. fsockopen enables super-hero-like tricks.

PHP fsockopen for DNS lookups

The function has 3 arguments.

  • An IP address to look up.
  • A DNS server to query.
  • A timeout in seconds.

Using the 6 fastest DNS servers

This list includes OpenDNS, UltraDNS, Level3, RoadRunner, and of course, Google DNS (see wikipedia for more).

$ip = '208.86.158.195';
foreach ( array('8.8.8.8', '156.154.70.1', '208.67.222.222', '156.154.70.1', '209.244.0.4', '216.146.35.35') as $dns) {
  echo gethostbyaddr_timeout( $ip, $dns, 1 );
}


Download and Copy Code

Or download from: gethostbyaddr.txt

function gethostbyaddr_timeout( $ip, $dns, $timeout = 3 ) {
  // idea from http://www.php.net/manual/en/function.gethostbyaddr.php#46869
  // http://www.askapache.com/pub/php/gethostbyaddr.php
  
    // random transaction number (for routers etc to get the reply back)
    $data = rand( 10, 77 ) . "\1\0\0\1\0\0\0\0\0\0";
  
  // octals in the array, keys are strlen of bit
  $bitso = array("","\1","\2","\3" );
  foreach( array_reverse( explode( '.', $ip ) ) as $bit ) {
    $l=strlen($bit);
    $data.="{$bitso[$l]}".$bit;
  }
  
    // and the final bit of the request
  $data .= "\7in-addr\4arpa\0\0\x0C\0\1";
    
    // create UDP socket
  $errno = $errstr = 0;
    $fp = fsockopen( "udp://{$dns}", 53, $errno, $errstr, $timeout );
  if( ! $fp || ! is_resource( $fp ) )
    return $errno;
 
  if( function_exists( 'socket_set_timeout' ) ) {
    socket_set_timeout( $fp, $timeout );
  } elseif ( function_exists( 'stream_set_timeout' ) ) {
    stream_set_timeout( $fp, $timeout );
  }
 
    // send our request (and store request size so we can cheat later)
    $requestsize = fwrite( $fp, $data );
  $max_rx = $requestsize * 3;
  
  $start = time();
  $response = '';
  $responsesize = 0;
  // read one byte at a time until we hit the size cap or the timeout
  while ( $responsesize < $max_rx && ( ( time() - $start ) < $timeout ) && ($buf = fread( $fp, 1 ) ) !== false ) {
    $responsesize++;
    $response .= $buf;
  }
  // echo "[tx: $requestsize bytes]  [rx: {$responsesize} bytes]";
 
    // hope we get a reply
    if ( is_resource( $fp ) )
    fclose( $fp );
 
  // if empty response or bad response, return original ip
    if ( empty( $response ) || bin2hex( substr( $response, $requestsize + 2, 2 ) ) != '000c' )
    return $ip;
    
  // set up our variables
  $host = '';
  $len = $loops = 0;
  
  // set our pointer at the beginning of the hostname uses the request size from earlier rather than work it out
  $pos = $requestsize + 12;
  do {
    // get segment size
    $len = unpack( 'c', substr( $response, $pos, 1 ) );
    
    // null terminated string, so length 0 = finished - return the hostname, without the trailing .
    if ( $len[1] == 0 )
      return substr( $host, 0, -1 );
      
    // add segment to our host
    $host .= substr( $response, $pos + 1, $len[1] ) . '.';
    
    // move pointer on to the next segment
    $pos += $len[1] + 1;
    
    // recursion protection
    $loops++;
  }
  while ( $len[1] != 0 && $loops < 20 );
  
  // fall through: return the original ip if the name could not be parsed
  return $ip;
}

PHP fsockopen for FAST DNS lookups over UDP… originally appeared on AskApache.com

The post PHP fsockopen for FAST DNS lookups over UDP appeared first on AskApache.

Separate favicons for the Frontend and Backend


Here's a nifty little idea I had that has some merit. Separate favicons for separate areas of a site. Basically, I can't live without Firefox or Chrome and the way they use multiple tabs.

Separate favicons for the Frontend and Backend

I usually have several tabs open for a single site. Some tabs are in the backend, usually meaning WordPress administration area, and others are in the frontend, meaning the homepage or viewing a post. I'm constantly going back and forth between tabs, often to edit a post, and then switch to the preview of the post. Now, with 50 tabs open at one time, which isn't very unusual for me, it can become difficult to quickly spot which tab is which. Solution? Create 2 favicons. One for the frontend, and a different one for the backend! This makes it soooo much easier to quickly switch to the correct tab, and even though it's a fairly small trick/tip compared to most of the articles on this site, it's helpful enough that I wanted to put it out there for all you wonderful readers.

Separate favicons using WordPress

So there are many ways to do this, but probably the best is to just add a simple little function to your theme's functions.php file.

Just add this to your functions.php file. Then you will need to create a favicon.ico in your root folder where your wp-config.php file lives, and an admin-favicon.ico in your active theme's folder where your style.css file lives.


Raw Code

<?php
 
function askapache_separate_favicons() {
  
  // default for frontend
  $favicon_uri = WP_SITEURL . '/favicon.ico';
  
  // if in backend change to the admin-favicon.ico file located in the active theme directory where style.css is
  if ( is_admin() ) $favicon_uri = preg_replace( '|https?://[^/]+|i', '', get_stylesheet_directory_uri() ) . '/admin-favicon.ico';
 
  // output the xhtml
  echo '<link rel="shortcut icon" href="' . $favicon_uri . '" type="image/x-icon" />';
  
}
add_action( 'wp_head', 'askapache_separate_favicons' );
add_action( 'admin_head', 'askapache_separate_favicons' );
 
?>

Separate favicons for the Frontend and Backend… originally appeared on AskApache.com

The post Separate favicons for the Frontend and Backend appeared first on AskApache.

Htaccess Rewrite for Redirecting Uppercase to Lowercase


Want to redirect all links with any uppercase characters to lowercase using pure mod_rewrite within an .htaccess file? Sure, why not! Or, for those with access to httpd.conf, use RewriteMap or mod_speling instead.

Htaccess to Redirect Uppercase to Lowercase

This should go at the very top of your .htaccess file, or at least above ANY other RewriteRules. That is because it uses a loop: until there are no more uppercase characters to convert, it keeps restarting at the first RewriteRule. And despite the looping, this is actually really quick and isn't going to slow anything down.

RewriteEngine On
RewriteBase /
 
# If there are caps, set HASCAPS to true and skip next rule
RewriteRule [A-Z] - [E=HASCAPS:TRUE,S=1]
 
# Skip this entire section if no uppercase letters in requested URL
RewriteRule ![A-Z] - [S=28]
 
# Replace single occurrence of CAP with cap, then process next rule.
RewriteRule ^([^A]*)A(.*)$ $1a$2
RewriteRule ^([^B]*)B(.*)$ $1b$2
RewriteRule ^([^C]*)C(.*)$ $1c$2
RewriteRule ^([^D]*)D(.*)$ $1d$2
RewriteRule ^([^E]*)E(.*)$ $1e$2
RewriteRule ^([^F]*)F(.*)$ $1f$2
RewriteRule ^([^G]*)G(.*)$ $1g$2
RewriteRule ^([^H]*)H(.*)$ $1h$2
RewriteRule ^([^I]*)I(.*)$ $1i$2
RewriteRule ^([^J]*)J(.*)$ $1j$2
RewriteRule ^([^K]*)K(.*)$ $1k$2
RewriteRule ^([^L]*)L(.*)$ $1l$2
RewriteRule ^([^M]*)M(.*)$ $1m$2
RewriteRule ^([^N]*)N(.*)$ $1n$2
RewriteRule ^([^O]*)O(.*)$ $1o$2
RewriteRule ^([^P]*)P(.*)$ $1p$2
RewriteRule ^([^Q]*)Q(.*)$ $1q$2
RewriteRule ^([^R]*)R(.*)$ $1r$2
RewriteRule ^([^S]*)S(.*)$ $1s$2
RewriteRule ^([^T]*)T(.*)$ $1t$2
RewriteRule ^([^U]*)U(.*)$ $1u$2
RewriteRule ^([^V]*)V(.*)$ $1v$2
RewriteRule ^([^W]*)W(.*)$ $1w$2
RewriteRule ^([^X]*)X(.*)$ $1x$2
RewriteRule ^([^Y]*)Y(.*)$ $1y$2
RewriteRule ^([^Z]*)Z(.*)$ $1z$2
 
# If there are any uppercase letters, restart at very first RewriteRule in file.
RewriteRule [A-Z] - [N]
 
RewriteCond %{ENV:HASCAPS} TRUE
RewriteRule ^/?(.*) /$1 [R=301,L]
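
A loose shell analogue of the looping idea: lowercase one capital at a time until none remain. The ruleset is smarter, handling one letter of the alphabet per rule before [N] restarts it, but the termination logic is the same. (MixedCASE.html is just a sample string; GNU sed's \l is assumed.)

```shell
#!/bin/bash
# Lowercase the first remaining capital on each pass, counting passes,
# the way the ruleset loops until no [A-Z] is left in the URL.
# Requires GNU sed for \l (lowercase next char) in the replacement.
s='MixedCASE.html'
passes=0
while printf '%s' "$s" | grep -q '[A-Z]'; do
    s=$(printf '%s' "$s" | sed 's/[A-Z]/\l&/')
    passes=$((passes + 1))
done
echo "$s after $passes passes"   # -> mixedcase.html after 5 passes
```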

Using RewriteMap in httpd.conf

This is technically a faster way to do this, but it has to be in the httpd.conf file, not .htaccess

RewriteEngine on
RewriteMap lowercase int:tolower
RewriteCond $1 [A-Z]
RewriteRule ^/?(.*)$ /${lowercase:$1} [R=301,L]

Using mod_speling in httpd.conf

You can also check out enabling the mod_speling Apache module. I personally don't use it, but many people love it.

<IfModule mod_speling.c>
CheckCaseOnly On
CheckSpelling On
</IfModule>

Htaccess Rewrite for Redirecting Uppercase to Lowercase… originally appeared on AskApache.com

The post Htaccess Rewrite for Redirecting Uppercase to Lowercase appeared first on AskApache.

Bash alternative to Reflector for Ranking Mirrors


So if you don't already know, I am a long-time user and supporter of Arch Linux. Arch uses a package management tool called pacman that works similarly to yum or apt, but much better IMHO. It uses a list of mirrors to perform the actual downloading of the package files, so you want the fastest mirrors to be in the mirror list. The old way is to use reflector to rank the speed of the mirrors, which is a python script. My way is pure bash using curl, sed, awk, xargs, and sort. Very simple and IMHO more effective than reflector.

Creating /etc/pacman.d/mirrorlist

Just run the script and redirect the output to /etc/pacman.d/mirrorlist like this:

$ ./reflector.sh | sudo tee /etc/pacman.d/mirrorlist

Bash alternative to Reflector for Ranking Mirrors

How it works

Well it's simple, essentially it performs these steps:

  1. Fetch the current list of (only current 100%) mirrors from the official site.
  2. Use curl to request a small 257-byte file from each of those mirrors (about 200-300 of them), 40 at a time, and save the fastest 50. This also gets the DNS cached for the next step.
  3. Use curl to request a 100 KB file from each of those mirrors, 10 at a time, measuring the total time of the request and the speed of the download.
  4. Finally, merge the results of both of those tests into a list of 50 mirrors in the format for outputting directly to /etc/pacman.d/mirrorlist
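
The measurement in steps 2 and 3 comes from curl's -w format string, which prints timing fields after the transfer; the script then sorts mirrors on the speed field. A file:// URL is used below only so the example runs offline (the real script hits each mirror's package files):

```shell
#!/bin/bash
# -w prints total time, average download speed, and the final URL,
# separated by '@' so sort -t@ can key on the speed field.
curl -Lks -o /dev/null \
    -w '%{time_total}@%{speed_download}@%{url_effective}\n' \
    --url 'file:///dev/null'
```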

Why pure bash over reflector?

Well because I like my systems extra crazy lean, I often don't want to install python right after an initial install of Arch. Also, this is much faster and easier on the system resources, and I believe it is also more accurate. I'd like to encourage others to turn to pure shell scripting to do simple tasks like this, often that is a better long-term solution than building a new piece of software. But I'm not against reflector, it's a pretty awesome bit of python with many features.

reflector.sh Source

Download reflector.sh

#!/bin/bash
# Updated: Tues May 07 21:04:12 2013 by webmaster@askapache
# @ http://www.askapache.com/shellscript/reflector-ranking-mirrors.html
# Copyright (C) 2013 Free Software Foundation, Inc.
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
# if mirrors exists, cat it, otherwise create it
function get_mirrors () #{{{1
{
   if [[ -s $MIRRORS ]]; then
          cat $MIRRORS;
   else
          curl -LksS -o - 'https://www.archlinux.org/mirrors/status/json/' | \
          sed 's,{,\n{,g' | sed -n '/rsync/d; /pct": 1.0/p' | sed 's,^.*"url": "\([^"]\+\)".*,\1,g' > $MIRRORS
          cat $MIRRORS;
   fi
}
 
function get_core_urls () #{{{1
{
   get_mirrors | sed "s,$,core/os/${ARCH}/core.db.tar.gz,g"
}
 
function get_gcc_urls () #{{{1
{
   get_mirrors | sed "s,$,core/os/${ARCH}/${GCC_URL},g"
}
 
# rm tmp file on exit
trap "exitcode=\$?; (rm -f \$MIRRORS 2>/dev/null;) && exit \$exitcode" 0;
trap "exit 1" 1 2 13 15;
 
# file containing mirror urls
MIRRORS=`(mktemp -t reflector-mirrorsXXXX) 2>/dev/null` && test -w "$MIRRORS" || MIRRORS=~/reflector.mirrorsXXX
 
# arch
ARCH=`(uname -m) 2>/dev/null` || ARCH=x86_64
 
# the gcc file
GCC_URL=$( curl -LksSH --url ftp://ftp.archlinux.org/core/os/${ARCH}/ 2>/dev/null | sed -n 's/^.*\ \(gcc-[0-9]\+.*.tar.xz.sig\)\ -.*$/\1/gp' );
 
{
   # faster as primarily used to pre-resolve dns for 2nd core test
   get_gcc_urls | xargs -I'{}' -P40 curl -Lks -o /dev/null -m 3 --retry 0 --no-keepalive -w '%{time_total}@%{speed_download}@%{url_effective}\n' --url '{}' |\
   sort -t@ -k2 -nr | head -n 50 | cut -d'@' -f3 | sed 's,core/os/'"${ARCH}/${GCC_URL}"',$repo/os/$arch,g'
 
   get_core_urls | xargs -I'{}' -P10 curl -Lks -o /dev/null -m 5 --retry 0 --no-keepalive -w '%{time_total}@%{speed_download}@%{url_effective}\n' --url '{}' |\
   sort -t@ -k2 -nr | head -n 50 | cut -d'@' -f3 | sed 's,core/os/'"${ARCH}"'/core.db.tar.gz,$repo/os/$arch,g'
} | sed 's,^,Server = ,g' | awk '{ if (!h[$0]) { print $0; h[$0]=1 } }'
 
exit $?;
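
The closing awk filter deserves a note of its own: it is the classic order-preserving dedup, so only the first occurrence of each mirror line survives when the two test runs are merged:

```shell
#!/bin/bash
# Order-preserving dedup: h[] remembers every line already printed,
# so duplicates are dropped without disturbing the sort order.
printf 'a\nb\na\nc\nb\n' | awk '{ if (!h[$0]) { print $0; h[$0]=1 } }'
# prints:
# a
# b
# c
```

Unlike sort -u, this keeps the lines in their original (speed-ranked) order, which is exactly what a mirrorlist needs.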

Xargs running curl in parallel

Just shows the output of htop while running the script.

Bash alternative to Reflector for Ranking Mirrors

Bash alternative to Reflector for Ranking Mirrors… originally appeared on AskApache.com

The post Bash alternative to Reflector for Ranking Mirrors appeared first on AskApache.

Show Events that Occurred on this day in the Past


The Function: askapache_calendar

Note the screenshot is awesome due to my custom .vimrc. It doesn't show the randomized color used in the code below.

Show Events that Occurred on this day in the Past

Download OpenBSD Calendars

By default this script will use any calendars in your home directory ~/.calendar. If that directory does not exist when you call this function, it will tell you it is installing the calendars for you. It grabs the list of calendars straight from the OpenBSD CVS tree. Here is the list:

  • calendar.all
  • calendar.birthday
  • calendar.canada
  • calendar.christian
  • calendar.computer
  • calendar.croatian
  • calendar.discord
  • calendar.fictional
  • calendar.french
  • calendar.german
  • calendar.history
  • calendar.holiday
  • calendar.judaic
  • calendar.music
  • calendar.openbsd
  • calendar.pagan
  • calendar.russian
  • calendar.space
  • calendar.ushistory
  • calendar.usholiday
  • calendar.world

Calendar Function Examples

Illustrating the random color output.

Show Events that Occurred on this day in the Past

Source Code for Calendar Function

Download askapache_calendar.sh

#!/bin/bash
# Updated: Thu May 16 21:07:54 2013
# @ http://www.askapache.com/linux/show-events-occurred-day-linux.html
# Copyright (C) 2013 Free Software Foundation, Inc.
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
function askapache_calendar ()
{
 
   if [[ ! -d ~/.calendar ]]; then
          echo "INSTALLING CALENDARS TO ~/.calendar";
 
          local U=http://www.openbsd.org/cgi-bin/cvsweb/~checkout~/src/usr.bin/calendar/calendars/ OPWD=$PWD;
 
          mkdir -pv ~/.calendar;
 
          cd ~/.calendar
          curl -# $(for f in $(curl -sSo - $U | sed '/">calendar\./!d; s,^.*;<a href="\./\([^"]\+\)".*$,\1,g'); do echo -n " -O $U$f"; done);
          cd $OPWD
 
          echo "CALENDARS INSTALLED";
   fi
 
   local COLOR=$( echo -en $(( $RANDOM % ${1:-$RANDOM} + 1 )) );
 
   echo -en "`tput setaf $COLOR`";
   sed -n "/$(date +%m\\/%d\\\|%b\*\ %d)/p" ~/.calendar/c*
   echo -en "`tput sgr0`"
}
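
The sed address inside askapache_calendar matches today's date in either numeric "MM/DD" or abbreviated "Mon* DD" form. As a quick sketch, these are the two patterns the embedded date(1) calls expand to:

```shell
# The two date patterns the function greps the calendar files for:
today_slash=$(date +%m/%d)       # e.g. 05/16
today_abbr=$(date '+%b* %d')     # e.g. May* 16
echo "$today_slash"
echo "$today_abbr"
```

Any calendar entry written with either form for today's date will be printed.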

Show Events that Occurred on this day in the Past… originally appeared on AskApache.com

The post Show Events that Occurred on this day in the Past appeared first on AskApache.


Alienware M18xR2 Review of Dell's fastest Laptop


Rise above all other laptops. The Alienware M18x is an extreme gaming laptop designed for those who want desktop-quality performance with the flexibility of a laptop: the best of both worlds. Note, there are 0 games on my M18x. This thing is a beast, no doubt.

Alienware M18xR2 Review of Dell's fastest Laptop

Best Laptop Available

Well, Dell as in Alienware, and Alienware as in Clevo. This is the cream of the crop, the best of the best. I've had mine for over a year now and have had 0 issues with it. It's fast, as in crazy fast, the fastest of the Alienware lineup.

Awards

Laptop Best Laptops of 2012: "The M18X R2 has a gorgeous 18-inch 1080p display and Alienware's awesome customizable backlighting."

PCMag Readers' Choice Award: "Its customers like what they are buying."

LAPTOP Mag Editors' Choice: "The Alienware M18x R2 is the gaming notebook dreams are made of."

Hot Hardware Recommended Award: "Performance from the Alienware M18x R2 is out of this world."

Video


Experience Rating by Windows

Windows rates the M18xR2 at a base score of 7.5. Subscores:

Component Details Subscore
Processor Intel(R) Core(TM) i7-3610QM CPU @ 2.30GHz 7.6
Memory (RAM) 16.0 GB 7.7
Graphics NVIDIA GeForce GTX 675M 7.5
Gaming graphics 4095 MB Total available graphics memory 7.5
Primary hard disk 62GB Free (161GB Total) 7.9


King Penguin Linux Notebook


The machine itself is super super ultra-thin, wafer thin, very cool looking. It's crazy light; I was amazed how slick it looked when it arrived.
Huge amount of open-source hardware/chipsets/etc. The first time I went through the dmesg I was smiling. 2 USB 3.0 ports, and they actually work as promised, with very very fast transfer speeds to my USB 3.0 external SSD. Incredible speed; boot time is the fastest I've ever seen. Starts at $720; mine was $1,800.

Who is ThinkPenguin?

About: Our products are freedom-compatible, meaning they will work with just about any free software operating system. This is made possible by selling products with free software compatible chipsets.

Free software is a set of principles that ensure end-users retain full control over their computer. Free software can be used, studied, and modified without restriction.

The chipsets we use encourage community development and user participation. Users cannot be locked into a vendor or product, be forced into an expensive upgrade, or have other digital restrictions placed on them.

King Penguin GNU / Linux Notebook

King Penguin Linux Notebook

This is the highest-performance notebook sold by ThinkPenguin, and I've had mine for about 2 months. Like all the ThinkPenguin machines, this sucker is dirt cheap compared to mainstream dealers!! Starts at $720. Product Page. Below are the options I chose during the order; my total was only around $1,800! That's less than half what my Alienware M18xR2 cost.

Memory / RAM

For my customization, the first thing of course was to max out the available RAM to 16GB DDR3.

Hard Drive

Knowing how important disk speed is, I opted for the 120GB SSD, which is blazing fast.

CPU / Processor

This machine does a lot of encryption and media tasks, like SSH, so I opted for the Quad-Core i7-2630QM (2.0 GHz, 2.9 GHz turbo).

Distribution / OS

Arch Linux! My favorite distro going back a decade. Unlike most single-distro proponents (Ubuntu users especially), I've tried hundreds. Mostly I use Arch for personal use and Red Hat for professional use.

Things I like

The machine itself is super super tiny, wafer thin. It's incredibly light; really, I was amazed when it arrived how thin and ultra-clean it was.

Huge amount of open-source hardware/chipsets/etc.. The first time I went through the dmesg I was smiling.

2 USB 3.0 ports, and they actually work as promised, very very fast.

Wifi and Ethernet ports excellent. Battery life decent. Incredible speed.

ThinkPenguin - Supports Freedom

I first heard about ThinkPenguin from my FSF newsletter for members. If the FSF supports a company, you should all go try and support them as much as you can by purchasing from them.

I was actually there to purchase the first open-source (hardware and firmware/software) USB wireless-N adapter, and saw they actually specialize in selling machines. I'm hooked!

Specifications

Get it at: ThinkPenguin.com

Category Specification
Processor Up to 3rd Gen Intel i7
Screen 15.6" FHD (1920x1080) Matte 16:9
Wireless 802.11N Atheros Wifi (freedom compatible chipset)
Webcam 2.0M pixels HD video camera
Memory Up to 16GB
Battery 6 cell Lithium-Ion (about 7 hours, /w optimization)
Ports  2 x USB 2.0 ports
2 x USB 3.0 ports
1 x external VGA port
1 x HDMI output port
1 x Headphone jack
1 x Microphone jack
1 x RJ-45 LAN port (10/100/1000)
1 x DC-in jack
Touchpad & Keyboard multi-gesture and scrolling function
full size isolated keyboard with numeric pad
Chipset Mobile Intel® HM77 Express Chipset
Optical Drive Built-in Super-Multi Drive (supports DVD-RAM/R/RW/+/-/CD-R/RW)
Graphics Intel HD
Built-in Audio & Mic Yes
Approx. Dimensions 374(W) x 252(D) x 14 ~ 25.4(H)mm (Height excluded battery area)
14.73" x 9.92" x .55" ~ 1" inches
Weight 2.2kg, 4.8lbs (/w battery)
Default configuration Includes the latest release of Ubuntu
Compatible with Most other GNU/Linux flavours (hardware supports free & mainline kernels/project drivers)


Building strace-plus


strace+ is an improved version of strace that collects stack traces associated with each system call. Since system calls require an expensive user-kernel context switch, they are often sources of performance bottlenecks. strace+ allows programmers to do more detailed system call profiling and determine, say, which call sites led to costly syscalls and thus have potential for optimization.

strace vs strace+

strace vs strace+

Build Pre-requisites

  • binutils
  • autoconf
  • gdb
  • make
  • gcc-c++
  • gcc
  • gcc-x86_64-linux-gnu
  • glibc-static
  • python

Compile and Build strace+

  1. Check out the source code from Git (requires git >= 1.6.6)
    $ git clone https://code.google.com/p/strace-plus/
  2. Compile strace+
    $ cd strace-plus/
    $ autoreconf -f -i
    $ ./configure
    $ make
    $ cp strace strace+

Compile a "hello world" test program

  1. Create a file named hello.c. hello.c is a simple program that makes four write system calls via printf statements:
    #include <stdio.h>
     
    void bar() {
      printf("bar\n");
      printf("bar again\n");
    }
     
    void foo() {
      printf("foo\n");
      bar();
    }
     
    int main() {
      printf("Hello world\n");
      foo();
      return 0;
    }
  2. Compile it:
    $ gcc hello.c -o hello
  3. Test:
    $ ./hello
  4. Run strace+ on the hello executable to generate a trace file named hello.out.
    $ ./strace+ -o hello.out ./hello
  5. Post-process hello.out to print out a list of system calls each augmented with stack traces
    $ python scripts/pretty_print_strace_out.py hello.out --trace

Build Statically

I always try to compile tools statically when possible, especially in a case like this where you don't want strace+ to replace strace.

$ cd strace-plus/
$ export CFLAGS="-Os -fomit-frame-pointer -static -static-libgcc -ffunction-sections -fdata-sections -falign-functions=1 -falign-jumps=1 -falign-labels=1 -falign-loops=1 -fno-unwind-tables -fno-asynchronous-unwind-tables -Wl,--gc-sections -Wl,-Map=strace.mapfile"
$ autoreconf -i -f
$ ./configure
$ make CFLAGS="$CFLAGS"
$ cp strace strace+
# Normal strace
$ file /usr/bin/strace
/usr/bin/strace: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
 
# Static strace-plus
$ file strace+
strace+: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, not stripped

Bash Aliases Functions

Useful to stick in your .bash_profile

Alias: enhanced strace

alias strace='command strace -fq -s1000  -e trace=all 2>&1'

Alias: trace file calls

alias stracef='command strace -fq -s1000  -e trace=file 2>&1'

Function: tputtrace

function tputtrace ()
{
  ( strace -s5000 -e write tput $1 2>&1 )  | tee -a /dev/stderr | grep --color=always -o '"[^"]*"';
}
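
The trailing grep in tputtrace pulls out just the quoted payloads from strace's write() lines, which is what makes the escape sequences readable. The same extraction, run on a canned sample line (the sample is made up for illustration):

```shell
# Extract the quoted write() payload the way tputtrace's grep does:
line='write(1, "\33[1m", 4) = 4'
echo "$line" | grep -o '"[^"]*"'    # prints "\33[1m"
```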

See Also

vistrace: a visualization of strace


MySQL Performance Tuning Scripts and Know-How


Unless you are a total linux-freak-guru like myself, and even if you are, it can be enormously challenging and somewhat overwhelming to locate and eliminate MySQL bottlenecks. While many DBAs focus on improving the performance of the queries themselves, this post will focus on the highest-impact items: MySQL Server Performance and OS Performance for MySQL.

This post is a "best-of" compilation of the tricks and scripts I have found to be the most effective over the past decade. I'd like to write a 50 page article but am limiting this to 1 page.

For anyone serious about High Performance MySQL, I would highly recommend the fantastic book: "High Performance MySQL: Optimization, Backups, Replication, and more" (O'Reilly). I have spent many hours poring over its wisdom-filled pages and gaining much practical know-how.

MySQL Server Software

Each new MySQL server release contains ENORMOUS performance enhancements over previous versions. So the absolute first thing you should do: upgrade your MySQL Server and client libraries and keep them updated.

There are several "flavors" of MySQL, believe it or not. Most people use the stock MySQL Server. I, along with Wikipedia, Arch Linux, and others, use MariaDB. MariaDB is a greatly enhanced, 100% compatible replacement for the stock MySQL Server, and it incorporates the excellent work of the Percona project. The Percona flavor of MySQL is the other truly improved version of MySQL to consider. I personally spent a couple of years using Percona, then moved from Percona to MariaDB (which has a lot of Percona juju built in) and am no longer thinking about which version to go with. MariaDB is the bomb-diggity.

MySQL Engine

InnoDB not MyISAM. InnoDB may be surpassed by in-development engines like TokuDB. I ONLY use InnoDB, for everything.
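
As a sketch, switching to InnoDB by default takes two lines in my.cnf (the option names are standard MySQL 5.5+ settings; the buffer size is purely illustrative, not a recommendation from this post):

```
[mysqld]
default-storage-engine  = InnoDB
# The single most important InnoDB knob; commonly sized to a large
# fraction of RAM on a dedicated DB box (1G here is illustrative).
innodb_buffer_pool_size = 1G
```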

Types of MySQL Servers to optimize

Seriously? Optimize EVERYTHING! The screenshots below are taken from one of my live servers. That server used to have 8GB of RAM, but as you can see in the screenshots, it now runs on only 2GB. I was able to save some serious $$$ by optimizing my server without sacrificing speed... in fact I gained speed in many instances.

I've used these optimization techniques on monster servers with 32GB of ram and many slaves, and also on a machine with 1GB of ram (running arch-linux).

Tuning Scripts for MySQL

The first thing to understand and believe is that there is absolutely no substitute for having a professional tune your DB. I personally use 2 professionals to tune clients' DBs... I optimize it first, then I optimize it again after both pros are finished.

  1. A DBA who knows MySQL optimization inside and out, percona/mariadb experience = "the best".
  2. A Linux system admin GURU who can make sure the subtle and not-so-subtle settings and tweaks to the OS are geared for max performance.

If you are just learning or doing it yourself, props to you! In that case, you should use ALL 4 of these tools. The one thing you need to do before running any of them is make sure your MySQL server has been online for at least a week without restarting; otherwise the results will mostly be questionable. I especially like the Tuning-Primer shell script and the phpMyAdmin Advisor (which is fairly new to phpMyAdmin; I'm using 4.1-DEV-BETA).

The biggest areas to focus in on (IMHO) are:

  1. MEMORY/RAM, specifically the buffers
  2. SWAP
  3. ACID - Do you need full ACID, or can you (likely) make some sacrifices there for speed
  4. tmp tables, tmpdir (use a tmpfs mounted dir)
  5. Thread/Connections - How many processes and threads should be running
  6. open_files / table_cache - May need to boost your /etc/security/limits.conf and your /etc/sysctl.conf
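
To illustrate item 4, a tmpfs-backed tmpdir can be sketched like this (the mount point and size are assumptions for illustration, not values from this post):

```
# /etc/fstab: RAM-backed scratch space for MySQL temp tables (illustrative)
tmpfs  /var/mysqltmp  tmpfs  rw,mode=1777,size=512M  0 0

# my.cnf: point MySQL's temp-table directory at the tmpfs mount
[mysqld]
tmpdir = /var/mysqltmp
```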

Tuning-Primer

MySQL Tuning Primer Script - tuning-primer.sh - This script takes information from "SHOW STATUS LIKE..." and "SHOW VARIABLES LIKE..." then attempts to produce sane recommendations for tuning server variables. It is compatible with all versions of MySQL 3.23 and above.


phpMyAdmin Advisor

This tool is very similar to the tuning-primer tool. Nice and fast, and likely the most up-to-date tool.


MySQLTuner

MySQLTuner: a script written in Perl that will assist you with your MySQL configuration and make recommendations for increased performance and stability.


mysqlreport

mysqlreport: makes a friendly report of important MySQL status values. mysqlreport transforms the values from SHOW STATUS into an easy-to-read report that provides an in-depth understanding of how well MySQL is running. mysqlreport is a better alternative (and practically the only alternative) to manually interpreting SHOW STATUS.


Monitoring MySQL

The mysqladmin command is great and all, but these 3 tools are much more useful for the specialized task of monitoring MySQL. The most powerful is innotop, then mytop, and finally the phpMyAdmin Monitor is great for general big-picture monitoring. Also, make sure you understand and use slow query logging and mysqldumpslow as well.
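
For reference, slow query logging mentioned above is enabled in my.cnf along these lines (option names are valid for MySQL 5.1+; the path and threshold are illustrative):

```
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1    # seconds; queries slower than this get logged
```

Then `mysqldumpslow -s t /var/log/mysql/slow.log` summarizes the log sorted by total query time.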

mytop

mytop - a top clone for MySQL.


innotop

innotop


phpMyAdmin Monitor


More Resources


RXVT Customization with ~/.Xresources


After many years of using all sorts of terminal emulators (xterm, GNOME Terminal, KDE Konsole, xfce4-terminal, lxterminal, vte, yakuake, rote, roxterm, PuTTY, sakura, terminator), I finally settled in for the long haul with rxvt-unicode, aka urxvt.

I have X setup to start urxvt on startup, and have urxvt setup to auto-start a tmux session. It's freaking sweet.

The customizations I have made to my urxvt are through the use of X resources.

My BOX: Slim -> Ratpoison -> URxvt -> Tmux -> Bash

Download my ~/.Xresources file.

rxvt-unicode (urxvt)

rxvt-unicode is a highly customizable terminal emulator forked from rxvt. Commonly known as urxvt, rxvt-unicode can be daemonized to run clients within a single process in order to minimize the use of system resources. Developed by Marc Lehmann, some of the more outstanding features of rxvt-unicode include international language support through Unicode, the ability to display multiple font types and support for Perl extensions. Also see: rxvt-unicode wiki page on ArchWiki.

Xresources vs. Xdefaults

Definitive Verbose Answer

~/.Xdefaults is the older method of storing X resources. This file is re-read every time an Xlib program is started. If X11 is used over the network, the file must be present on the same filesystem as the programs.

~/.Xresources is newer. It is loaded with xrdb into the RESOURCE_MANAGER property of the X11 root window. Whenever any program looks up a resource, it is read straight from RESOURCE_MANAGER. If this property does not exist, Xlib falls back to the old method of reading .Xdefaults on every program startup.

Note that most distributions will load ~/.Xresources automatically if it is present, causing .Xdefaults to be ignored even if you have never run xrdb manually. The advantage of the new method is that it's enough to call xrdb once, and the resources will be available to any program running on this display, whether local or remote. (The name ~/.Xresources is only a convention: you can use xrdb to load any file, even .Xdefaults.)

Adding to .Xresources

You can stick this in your ~/.xinitrc file if need be.

[[ -f ~/.Xdefaults ]] && xrdb -merge ~/.Xdefaults

After you make a change to your ~/.Xdefaults or ~/.Xresources file, you will need to reload it (for example with xrdb ~/.Xdefaults), then close rxvt and reopen it.
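
The reload cycle, spelled out (this needs a running X session, so it is only a sketch):

```shell
# Merge your edits into the live RESOURCE_MANAGER property:
#   xrdb -merge ~/.Xresources
# Verify what the X server currently holds:
#   xrdb -query | grep -i urxvt
# Then close and reopen urxvt to pick up the new values.
```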

Useful Xresources for rxvt

Note that I use the 256-color enabled unicode version, named rxvt-unicode, so the name is URxvt, you could also try Rxvt as the resource name prefix.

Download my ~/.Xresources file.

URxvt*termName: screen-256color
URxvt*geometry: 240x84
URxvt*loginShell: true
URxvt*scrollColor: #777777
URxvt*scrollstyle: plain
URxvt*scrollTtyKeypress: true
URxvt*scrollTtyOutput: false
URxvt*scrollWithBuffer: false
URxvt*secondaryScreen: true
URxvt*secondaryScroll: true
URxvt*skipScroll: true
URxvt*scrollBar: false
URxvt*scrollBar_right: false
URxvt*scrollBar_floating: false
URxvt*fading: 30
URxvt*utmpInhibit: false
URxvt*urgentOnBell: false
URxvt*visualBell: true
URxvt*mapAlert: true
URxvt*mouseWheelScrollPage: false
URxvt*background: Black
URxvt*foreground: White
URxvt*colorUL: yellow
URxvt*underlineColor: yellow
URxvt*font: -xos4-terminus-medium-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:pixelsize:12
URxvt*boldFont: -xos4-terminus-bold-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:bold:pixelsize:12
URxvt*italicFont: xft:Bitstream Vera Sans Mono:italic:autohint=true:pixelsize=12
URxvt*boldItalicFont: xft:Bitstream Vera Sans Mono:bold:italic:autohint=true:pixelsize=12
URxvt*saveLines: 0
URxvt*buffered: true
URxvt*hold: false
URxvt*internalBorder:
URxvt*print-pipe: cat > $HOME/$(echo urxvt.dump.$(date +'%Y%m%d%H%M%S'))
URxvt*perl-ext-common:
URxvt*perl-ext:

termName

Specifies the terminal type name to be set in the TERM environment variable. I use tmux so this is helpful.

URxvt*termName:  screen-256color

geometry

Create the window with the specified X window geometry [default 80x24]. Base it on your $LINES and $COLUMNS.

URxvt*geometry:  240x84

loginShell

Start as a login shell by prepending a - to argv[0] of the shell. Again, for tmux, this is super-helpful and causes your bash login files like ~/.bash_profile to be loaded.

URxvt*loginShell:  true

scrollTtyKeypress

True: scroll to bottom when a non-special key is pressed. Special keys are those which are intercepted by rxvt for special handling and not passed on to the shell.

URxvt*scrollTtyKeypress:  true

scrollTtyOutput

Do not scroll to bottom when tty receives output

URxvt*scrollTtyOutput:  false

scrollWithBuffer

Do not scroll with the scrollback buffer when the tty receives new lines; adds some speed. Also, I use tmux scrollback buffers.

URxvt*scrollWithBuffer:  false

skipScroll

For speed. When receiving lots of lines, urxvt will only scroll once in a while (around 60 times/sec), resulting in fewer updates. This can result in urxvt never displaying some of the lines it receives.

URxvt*skipScroll:  true

scrollBar

Disable the scrollbar... why waste valuable screen real estate when you should be using tmux scrollback?

URxvt*scrollBar:  false

fading

Fade the text by the given percentage when focus is lost. This is neat, when I switch to a different window, or switch to a different machine ala synergy, it will fade the screen slightly.

URxvt*fading:  30

visualBell

Use visual bell on receipt of a bell character. Helpful to be used with inputrc and tmux.

URxvt*visualBell:  true

background

Use the specified colour as the window's background colour [default White]. Why in the world would you default to white unless you are old-school... as in 70s.

URxvt*background:  Black

foreground

Use the specified colour as the window's foreground colour [default Black]. See above.

URxvt*foreground:  White

colorUL

Use the specified colour to display underlined characters when the foreground colour is the default. Makes it easier to notice; the rxvt-unicode author's choice as well.

URxvt*colorUL:  yellow

underlineColor

If set, use the specified colour as the colour for the underline itself. If unset, use the foreground colour

URxvt*underlineColor:  yellow

font, boldFont, italicFont, boldItalicFont

A comma-separated list of font names that are checked in order when trying to find glyphs for characters. Man, for coding, nothing beats the terminus font... nothing! Also notice that boldFont, italicFont, and boldItalicFont are specified as well. This makes a huge difference you will notice right away.

URxvt*font:  -xos4-terminus-medium-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:pixelsize:12
URxvt*boldFont:  -xos4-terminus-bold-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:bold:pixelsize:12
URxvt*italicFont:  xft:Bitstream Vera Sans Mono:italic:autohint=true:pixelsize=12
URxvt*boldItalicFont:  xft:Bitstream Vera Sans Mono:bold:italic:autohint=true:pixelsize=12

saveLines

Save number lines in the scrollback buffer [default 64]. This resource is limited on most machines to 65535. I am a power-user, so I always use a multiplexer. Tmux if its available, otherwise screen. So I use the scrollback buffer in tmux or screen, which is much nicer.

URxvt*saveLines:  0

print-pipe

Specify a command pipe for vt100 printer [default lpr]. Use Print to initiate a screen dump to the printer and Ctrl-Print or Shift-Print to include the scrollback

URxvt*print-pipe:  cat > $HOME/$(echo urxvt.dump.$(date +'%Y%m%d%H%M%S'))
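
One thing to watch in print-pipe timestamp formats: date(1) specifiers are case-sensitive, %m is the month while %M is the minute (and %H the hour), so a dump filename that sorts chronologically wants %Y%m%d%H%M%S:

```shell
# Build a sortable dump filename; %Y%m%d%H%M%S = year month day hour minute second
fname="urxvt.dump.$(date +%Y%m%d%H%M%S)"
echo "$fname"
```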

perl-ext

Comma-separated list(s) of perl extension scripts (default: "default") to use in this terminal instance; blank disables. Setting both of these to blank completely disables perl initialization, giving a faster startup and a smaller footprint. Plus it is more secure.

URxvt*perl-ext:
URxvt*perl-ext-common:

Output RXVT Resources

This simple command is built-in to rxvt to show a list of Resource Names. Useful for pasting into your ~/.Xresources or ~/.Xdefaults

# urxvt --help 2>&1| sed -n '/:  /s/^ */! URxvt*/gp'
! URxvt*termName:                       string
! URxvt*geometry:                       geometry
! URxvt*chdir:                          string
! URxvt*reverseVideo:                   boolean
! URxvt*loginShell:                     boolean
! URxvt*jumpScroll:                     boolean
! URxvt*skipScroll:                     boolean
! URxvt*pastableTabs:                   boolean
! URxvt*scrollstyle:                    mode
! URxvt*scrollBar:                      boolean
! URxvt*scrollBar_right:                boolean
! URxvt*scrollBar_floating:             boolean
! URxvt*scrollBar_align:                mode
! URxvt*thickness:                      number
! URxvt*scrollTtyOutput:                boolean
! URxvt*scrollTtyKeypress:              boolean
! URxvt*scrollWithBuffer:               boolean
! URxvt*inheritPixmap:                  boolean
! URxvt*transparent:                    boolean
! URxvt*tintColor:                      color
! URxvt*shading:                        number
! URxvt*blurRadius:                     HxV
! URxvt*fading:                         number
! URxvt*fadeColor:                      color
! URxvt*utmpInhibit:                    boolean
! URxvt*urgentOnBell:                   boolean
! URxvt*visualBell:                     boolean
! URxvt*mapAlert:                       boolean
! URxvt*meta8:                          boolean
! URxvt*mouseWheelScrollPage:           boolean
! URxvt*tripleclickwords:               boolean
! URxvt*insecure:                       boolean
! URxvt*cursorUnderline:                boolean
! URxvt*cursorBlink:                    boolean
! URxvt*pointerBlank:                   boolean
! URxvt*background:                     color
! URxvt*foreground:                     color
! URxvt*color0:                         color
! URxvt*color1:                         color
! URxvt*color2:                         color
! URxvt*color3:                         color
! URxvt*color4:                         color
! URxvt*color5:                         color

rxvt Resources with descriptions

And here is a really helpful command to output the entire rxvt resources with descriptions taken from the manpage. I start by running this and appending it to the ~/.Xresources file.

# man -Pcat urxvt | sed -n '/th: b/,/^B/p'|sed '$d'|sed '/^ \{7\}[a-z]/s/^ */^/g' | sed -e :a -e 'N;s/\n/@@/g;ta;P;D' | sed 's,\^\([^@]\+\)@*[\t ]*\([^\^]\+\),! \2\n! URxvt*\1\n\n,g' | sed 's,@@\(  \+\),\n\1,g' | sed 's,@*$,,g' | sed '/^[^!]/d' | tr -d "'\`"
! Compile xft: Attempt to find a visual with the given bit depth; option -depth.
! URxvt*depth: bitdepth
 
! Compile xft: Turn on/off double-buffering for xft (default enabled).  On some card/driver combination enabling it slightly decreases performance, on most it greatly helps it. The slowdown is small, so it should normally be enabled.
! URxvt*buffered: boolean

Lean 256 Color Output Function

I've actually spent a lot of time on this function, it's gotta be the fastest leanest code on the net for displaying all 256 colors in a very useful way.

function aa_256 () 
{ 
    local o= i= x=`tput op` cols=`tput cols` y= oo= yy=;
    y=`printf %$(($cols-6))s`;
    yy=${y// /=};
    for i in {0..255};
    do
        o=00${i};
        oo=`echo -en "setaf ${i}\nsetab ${i}\n"|tput -S`;
        echo -e "${o:${#o}-3:3} ${oo}${yy}${x}";
    done
}
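
The width trick at the top of aa_256 is worth calling out: printf with a computed field width and no argument yields a run of spaces, and the `${var// /=}` pattern substitution swaps every space for '=', producing the colored bar. Isolated:

```shell
cols=20                   # stand-in for `tput cols` minus the label width
y=$(printf "%${cols}s")   # 20 spaces
yy=${y// /=}              # 20 '=' characters
echo "$yy"                # prints ====================
```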

256 Colors with aa_c666

In case you wondered how I generated that screenshot.. here's a BASH function I wrote. Also see my extremely verbose post: Terminal Escape Code Zen.

function aa_c666 () 
{ 
    local r= g= b= c= CR="`tput sgr0;tput init`" C="`tput op`" n="\n\n\n" t="  " s="    ";
    echo -e "${CR}${n}";
    function c666 () 
    { 
        local b= g=$1 r=$2;
        for ((b=0; b<6; b++))
        do
            c=$(( 16 + ($r*36) + ($g*6) + $b ));
            echo -en "setaf ${c}\nsetab ${c}\n" | tput -S;
            echo -en "${s}";
        done
    };
    function c666b () 
    { 
        local g=$1 r=;
        for ((r=0; r<6; r++))
        do
            echo -en " `c666 $g $r`${C} ";
        done
    };
    for ((g=0; g<6; g++))
    do
        c666b=`c666b $g`;
        echo -e " ${c666b}";
        echo -e " ${c666b}";
        echo -e " ${c666b}";
        echo -e " ${c666b}";
        echo -e " ${c666b}";
    done;
    echo -e "${CR}${n}${n}"
}


PDF.js


pdf.js is an HTML5 technology experiment that explores building a faithful and efficient Portable Document Format (PDF) renderer without native code assistance.

pdf.js is community-driven and supported by Mozilla Labs. Our goal is to create a general-purpose, web standards-based platform for parsing and rendering PDFs, and eventually release a PDF reader extension powered by pdf.js. Integration with Firefox is a possibility if the experiment proves successful.

Viewer Constants

var DEFAULT_URL = 'compressed.tracemonkey-pldi-09.pdf';
var DEFAULT_SCALE = 'auto';
var DEFAULT_SCALE_DELTA = 1.1;
var UNKNOWN_SCALE = 0;
var CACHE_SIZE = 20;
var CSS_UNITS = 96.0 / 72.0;
var SCROLLBAR_PADDING = 40;
var VERTICAL_PADDING = 5;
var MIN_SCALE = 0.25;
var MAX_SCALE = 4.0;
var SETTINGS_MEMORY = 20;
var SCALE_SELECT_CONTAINER_PADDING = 8;
var SCALE_SELECT_PADDING = 22;
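
A quick note on CSS_UNITS above: a PDF point is 1/72 inch while a CSS pixel is defined as 1/96 inch, so the viewer scales page dimensions by 96/72 ≈ 1.333 when sizing pages on screen. A one-liner check:

```shell
# 96 CSS pixels per inch divided by 72 PDF points per inch
awk 'BEGIN { printf "%.4f\n", 96/72 }'   # prints 1.3333
```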

Viewer options

Options for pdf.js's viewer that can be given at the URL level. This page is current as of 2013 April 21.

Multiple values of either type can be combined by separating with an ampersand (&) including after the hash (Example: #page=2&textLayer=off).

Options after the ?

var params = PDFView.parseQueryString(document.location.search.substring(1));
var file = params.file || DEFAULT_URL;
  • file=file.pdf: pdf filename to use

Options after the #

// Special debugging flags in the hash section of the URL.
var hash = document.location.hash.substring(1);
var hashParams = PDFView.parseQueryString(hash);
  • page=1: page number
  • zoom=100: zoom level
  • nameddest=here: go to a named destination
  • pagemode=[thumbs|bookmarks|outline|none]:
  • locale=en-US: set a localization
  • textLayer=[off|visible|shadow|hover] - Disables or reveals the text layer that is used for text selection.
  • disableRange=true: disables chunked/206 partial-content loading; the entire PDF is fetched from the start
  • disableAutoFetch=true: disables automatically fetching the next unfetched chunk of the PDF when there are no pending requests
  • disableWorker=true: disables the web worker, which makes it easier to use debugging tools like Firebug that don't support workers yet
  • disableFontFace=true: disables loading embedded fonts through the browser's @font-face mechanism
  • disableHistory=true: disables manipulation of the browser history
  • pdfBug=all - Enables all the debugging tools. You can optionally enable specific tools by specifying them by their id e.g. pdfBug=FontInspector or pdfBug=Stepper,FontInspector

Debuggery

Debugging PDF.js

URL Parameters

pdf.js has several special url parameters to alter how pdf.js works and enable debugging tools. All of these parameters go into the hash section of the url (after the # symbol) and follow a query string syntax (e.g. #param1=value1&param2=value2). Note: since all of these parameters are in the hash section you have to refresh the page after adding them.

if ('pdfBug' in hashParams) {
  PDFJS.pdfBug = true;
  var pdfBug = hashParams['pdfBug'];
  var enabled = pdfBug.split(',');
  PDFBug.enable(enabled);
  PDFBug.init();
}
  • pdfBug=all - Enables all the debugging tools. You can optionally enable specific tools by specifying them by their id e.g. pdfBug=FontInspector or pdfBug=Stepper,FontInspector. More about PDFBug below.
  • disableWorker=true - Disables the worker which makes it easier to use debugging tools like firebug where workers aren't supported yet.
  • textLayer=[off|visible|shadow|hover] - Disables or reveals the text layer that is used for text selection.

PDFBug Tools

To enable see above.

Font Inspector

id: FontInspector

The font inspector allows you to view what fonts are used within the page. It also allows you to download a font using Save Link As... and naming it with a .otf extension. See the section below on debugging fonts.

Stepper

id: Stepper

The stepper tool lets you step through the drawing commands one at a time and hopefully find where a possible issue is coming from. It is also useful for learning how a PDF is structured and the order of its operations. To walk through the drawing commands, first add a break point, refresh the page, and then use the s key to step one command at a time or c to continue until the next breakpoint (line that is checked).

github pdf.js Wiki Docs

Full list.

More Info

PDF.js… originally appeared on AskApache.com

The post PDF.js appeared first on AskApache.

Discover who’s tracking you online with Collusion


AskApache.com

The Collusion add-on for Firefox is super-legit. Just navigate to any website normally, then click the little Collusion icon in the status bar and a full detailed report pops up in a new tab describing all the sites that the current site shared your data with (shared as in connected).

View the full introduction to Collusion on mozilla.org or the Add-on Page.

Also added to my Firefox Add-on Collection: AskApache Web Development (Advanced)

The Collusion Add-on

Collusion (by Jono X, Dethe Elza) is an experimental add-on for Firefox that allows you to see which sites are using third-party cookies to track your movements across the Web. It shows, in real time, how that data creates a spider-web of interaction between companies and other trackers.

If you don't see the Collusion icon in the bottom right corner of your browser, make sure the add-on bar is shown. On Windows, click the Firefox menu button, then click Options, then check "Add-on Bar". On Mac, go to the View menu, then the Toolbars menu, then check "Add-on Bar".

We're working on adding more features, such as being able to upload an anonymized version of your Collusion data so we can build a big picture view of the tracker ecosystem. We're also working on visualizing other methods of tracking besides third-party cookies and improving the visualizations generally. If you would like to keep an eye on our progress, file a bug, request a feature, or get a copy of the source code, please go to https://github.com/mozilla/collusion or check out our wiki page at https://wiki.mozilla.org/Collusion.

Discover who's tracking you online

Collusion is an experimental add-on for Firefox and allows you to see all the third parties that are tracking your movements across the Web. It will show, in real time, how that data creates a spider-web of interaction between companies and other trackers.

View the demo to see how you're being tracked, or download the Collusion add-on for Firefox.


Take control of your data

We recognize the importance of transparency and our mission is all about empowering users — both with tools and with information. The Ford Foundation is supporting Mozilla to develop the Collusion add-on so it will enable users to not only see who is tracking them across the Web, but also to turn that tracking off when they want to.

Telling the global tracking story

Your data can be part of the larger story. When we launch the full version of Collusion, it will allow you to opt-in to sharing your anonymous data in a global database of web tracker data. We'll combine all that information and make it available to help researchers, journalists, and others analyze and explain how data is tracked on the web.

Collusion is about choice

Not all tracking is bad. Many services rely on user data to provide relevant content and enhance your online experience. But most tracking happens without users' consent and without their knowledge. That's not okay. It should be you who decides when, how and if you want to be tracked. Collusion will be a powerful tool to help you do that.



Help the Free Software Foundation



Become a Member


Become a member of the Free Software Foundation today to help us reach our goal of $450,000 by January 31st.

Quoted from: Build us up! Free software is a cornerstone of a free society

You guessed it. We're not talking about Santa. The NSA and the world's big Internet and telecommunications companies have built a massive Surveillance Industrial Complex that undermines all our freedoms. We need to build our own infrastructure, one that values freedom, privacy, and security for all people. We need your help to do it.

The Free Software Foundation has been defending computer users' freedoms and privacy for nearly thirty years. No matter the political climate, we have always fought to defend the freedoms of all computer users. Today, in the face of mass surveillance, more people than ever are discovering that free software is a necessary cornerstone of a free society. With this momentum, we can turn our blueprints for a free software future into brick and mortar.

Since day one of the PRISM scandal, the FSF has been sounding the alarm. We've published high-profile op-eds in Wired and Slate, and as members of the Stop Watching Us coalition we marched on Washington to protest mass surveillance. And we're not just talking about the need for change; we're doing something about it. This September, we hosted a hackathon in honor of the GNU System's 30th anniversary, where participants made contributions to a dozen projects that form key building blocks of a surveillance-free future.

All the while, we've continued to build towards many more of the prerequisites for a free society, from working with hardware manufacturers to fighting DRM in HTML5.

With your support, we can do so much more in 2014.

The Free Software Foundation is only as powerful as our membership base; individual donations account for the majority of our funding each year. This has always been a grassroots, community-supported movement, and it always will be. This year, we need to meet our goal of $450,000 to build our vision for the free software movement. You can become a member of the FSF for just $10/month ($5/month for students); when you join, you'll get a variety of benefits, including free admission to our annual conference, LibrePlanet.

Please consider joining as a member to help us meet our fundraising goal by January 31st.

Every dollar you give helps to build us up.

If you believe in our work, please share this appeal with your social networks.


The FSF's campaigns

The FSF's campaigns target important opportunities for free software adoption and development, empower people against specific threats to their freedom, and move us closer to a free society.

Our successes are driven by the efforts of supporters and activists like you all around the world. Please take a moment today to make a contribution, by joining the FSF as an associate member, making a tax-deductible donation and volunteering your time.

Free JavaScript

The Free JavaScript campaign is an ongoing effort to persuade organizations to make their Web sites work without requiring that users run any nonfree software. By convincing influential sites to make the transition, we raise awareness of the need for free software-friendly Web sites and influence the owners of other sites to follow.

Stop DRM in HTML5

The World Wide Web Consortium (W3C) is considering a proposal to weave Digital Restrictions Management (DRM) into HTML5 — in other words, into the very fabric of the Web. Millions of Internet users came together to defeat SOPA/PIPA, but now Big Media moguls are going through non-governmental channels to try to sneak digital restrictions into every interaction we have online. Giants like Netflix, Google, Microsoft, and the BBC are all rallying behind this disastrous proposal, which flies in the face of the W3C's mission to "lead the World Wide Web to its full potential."

Please sign our petition to stop DRM in HTML5.

Secure Boot vs Restricted Boot


When done correctly, "Secure Boot" is designed to protect against malware by preventing computers from loading unauthorized binary programs when booting. In practice, this means that computers implementing it won't boot unauthorized operating systems -- including initially authorized systems that have been modified without being re-approved.

This could be a feature deserving of the name, as long as the user is able to authorize the programs she wants to use, so she can run free software written and modified by herself or people she trusts. However, we are concerned that Microsoft and hardware manufacturers will implement these boot restrictions in a way that will prevent users from booting anything other than Windows. In this case, we are better off calling the technology Restricted Boot, since such a requirement would be a disastrous restriction on computer users and not a security feature at all.


Upgrade from Windows 8


Microsoft has shelled out a mind-boggling estimated $1.8 billion to convince the public that it needs Windows 8. Why the record-breaking marketing deluge? Because a slick ad campaign is Microsoft's best shot at hiding what Windows 8 really is: a faulty product that restricts your freedom, invades your privacy, and controls your data.

Windows 8 comes with plenty of "features" Microsoft won't tell you about. Because Windows 8 is proprietary software, you can't modify Windows 8 or see how it is built, which means Microsoft can use its operating system to exploit users and benefit special interests. Windows 8 also includes software that inspects the contents of your hard drive, and Microsoft claims the right to do this without warning. To make matters worse, Windows 8 also has a contacts cache that experts fear may store sensitive personal data and make users vulnerable to identity theft.

Learn more about our campaign and pledge to upgrade away from Windows at http://www.upgradefromwindows8.com

Surveillance

If we want to defang surveillance programs like PRISM, we need to stop using centralized systems and come together to build an Internet that's decentralized, trustworthy, and free "as in freedom." The good news is that the seeds of such a network are already out there; as we wrote in our statement on PRISM, ethical developers have been working for years on free software social media, communication, publishing, and more.

Check out the surveillance campaign area to get involved with the effort to make the Web safer and free from surveillance. There's something to do for people of all experience levels.


Working together for free software

Free software is simply software that respects our freedom — our freedom to learn and understand the software we are using. Free software is designed to free the user from restrictions put in place by proprietary software, and so using free software lets you join a global community of people who are making a political and ethical choice, asserting our rights to learn and to share what we learn with others.

Meet the free software gang

This is a campaign aimed at getting new users into free software.

The GNU Operating System

The GNU operating system is a complete operating system made entirely of free software. Millions of people are using GNU every day to edit their documents, browse the web, play games, and handle their email, or as part of a GNU/Linux system on their home computer. Even people who have never heard of it use GNU every day, because it powers many of the sites they visit and services they use. Learn more about GNU, and support progress on fully free operating systems by volunteering or donating to the FSF.

DefectiveByDesign.org

Digital Restrictions Management (DRM) robs us of control over the technology we use and the culture we live in. DRM and the DMCA can make it illegal to share an article, back up your kids' favorite DVD, or move your music from one player to another. Since DRM is inherently incompatible with free software, it also excludes free software users from equal participation in culture. DefectiveByDesign.org is our anti-DRM campaign, where we mobilize large vocal communities to reject products from businesses that insist on using DRM to control their customers. Learn more at DefectiveByDesign.org and the campaign wiki.

PlayOgg

The PlayOgg campaign (playogg.org) promotes the use of free audio and video formats unencumbered by patent restrictions, rather than MP3, QuickTime, Windows Media, and AAC, whose patent problems threaten free software and hinder progress. We also promote the use of the new "video tag" standard as an alternative to Adobe Flash for embedding audio and video in webpages. Find out more about PlayOgg at playogg.org or at the campaign wiki. You can also join the PlayOgg volunteer team to push companies and services to use Ogg by joining the mailing list.

End Software Patents

Software patents create a legal nightmare for all software developers and pose particular problems for the free software movement. So as the FSF campaigns for formats that are free of software patents, we also work on the more fundamental task of ending software patents entirely, through legal and legislative action. Learn more at EndSoftPatents.org, see the wiki, join the action alert mailing list.

Campaign for OpenDocument

Our OpenDocument campaign fights for the use of free formats in government documents, pushing governments to adopt policies requiring that all digital public documents and information be stored and distributed in formats that are standard, open, and royalty-free. OpenDocument Format (ODF) is one such format. Get involved and take action against Microsoft's Open XML.

High Priority Free Software Projects

The FSF's High Priority Projects list and reverse engineering projects list serve to foster the development of projects that are important for increasing the adoption and use of free software and free software operating systems. Some of the most important projects on our list are "replacement projects". These projects are important because they address areas where users are continually seduced into using nonfree software by the lack of an adequate free replacement. These are critical projects that need your help.

LibrePlanet


The FSF is just one part of a massive global movement for free software. Recognizing this, the FSF created LibrePlanet (libreplanet.org), a wiki and community site to help free software users, developers and activists around the world connect and share information and resources. Visit LibrePlanet to create a profile, add your organization, or list your activist project. You can also join the mailing list, the IRC channel, or the identi.ca group.

Campaign for Hardware that Supports Free Software

Hardware manufacturers are often negligent in offering support for free software. Our hardware directory helps people identify hardware to buy that works with their free software operating system. It is also an important part of the FSF's ongoing work to persuade hardware vendors to respect free software users. For more information on the FSF's plans, read our whitepaper: The road to hardware free from restriction, or see its most recent revisions on its LibrePlanet wiki page.

Free BIOS Campaign

Our campaign for a Free BIOS protects your rights by giving you freedom at the lowest level (if the BIOS is not free, manufacturers can use it to interfere with your control over the computer you use, for example). One piece of this campaign is Coreboot, a free software project aimed at replacing the proprietary BIOS (firmware) you can find in most of today's computers. Visit coreboot.org to learn more about the development of Coreboot, supported systems, and how you can get started running a free BIOS. For more, join the Coreboot mailing list. The FSF is also pushing for the creation of a laptop with a free BIOS.

Campaign against the ACTA

The FSF opposes the ACTA (Anti-Counterfeiting Trade Agreement) because it is a threat to the distribution and development of free software, and we campaign against this and other international agreements that undermine people's right to control technology. Learn more about our campaign against the ACTA.

The FSF welcomes volunteers in all of its campaigns. You can keep up with the most important happenings in our campaigns by following our news feed, blogs feed, and the #fsf IRC channel.


Get Number of Running Processes with PHP



Recently I had to set up a script to curl 10k URLs, but it could only run 500 requests at any one time; a 501st request would cause a 503 server error, or even a 500. In order to stay under that limit, I created a function that returns the number of currently running processes on the machine in an extremely fast and efficient way.

How it works


It gets the number of running processes in the most efficient way possible, by doing a simple stat on the /proc directory and reading its hard-link count. On Linux, every running process gets its own subdirectory under /proc, named after its process ID (so /proc/1234 belongs to process ID 1234), and every subdirectory adds one hard link to /proc itself. The hard-link count of /proc is therefore a very close approximation of the number of running processes (it also counts a handful of static entries such as /proc/sys, plus the . and .. links).

Pros and Cons

The upside to this method is that it is incredibly fast and efficient. The downside is it won't tell you how many of those processes are php, httpd, exim, sshd, etc.

Equivalent Unix Command

This command will give you the same thing from the command line, such as from the Bash shell.

$ stat -c '%h' /proc
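To sanity-check that the link count really tracks the process count, you can compare it against the number of numeric (per-PID) entries in /proc; a Linux-only sketch:

```shell
# Total hard links on /proc: one per subdirectory, plus '.' and '..'.
nlinks=$(stat -c '%h' /proc)

# Count only the numeric entries, i.e. the per-process directories.
npids=$(ls /proc | grep -c '^[0-9][0-9]*$')

# The link count slightly exceeds the PID count because of static
# entries such as /proc/sys and /proc/net.
echo "links=$nlinks pids=$npids"
```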

Get Process Count

Here is the function. Note that it is written for PHP 5.3.0+; if you are running an older version, use the plain clearstatcache() call shown in the comment instead. Either way it is no big deal in terms of performance, this baby is super quick.

/** askapache_get_process_count()
 * Returns the number of running processes
 *
 * @since 3.2.1
 * @version 1.3
 *
 * @return int
 */
function askapache_get_process_count() {
 
    // PHP < 5.3.0
    // clearstatcache();
 
    // PHP >= 5.3.0
    clearstatcache( true, '/proc' );
 
    $stat = stat( '/proc' );
    return ( isset( $stat[3] ) ? $stat[3] : 0 );
}

Example - Sleep 5 seconds until processes less than max

This example will repeatedly sleep for 5 seconds until the process count is no longer above max_procs.

$max_procs = 200;
 
while ( ( $proc_count = askapache_get_process_count() ) > $max_procs ) {
    error_log( "ALERT!! procs > max_procs:  {$proc_count} > {$max_procs}.. SLEEP FOR 5 SECS " );
    sleep( 5 );
}

clearstatcache - Clears file status cache

void clearstatcache ([ bool $clear_realpath_cache = false [, string $filename ]] )

Description

clearstatcache

When you use stat(), lstat(), or any of the other functions listed in the affected functions list (below), PHP caches the information those functions return in order to provide faster performance. However, in certain cases, you may want to clear the cached information. For instance, if the same file is being checked multiple times within a single script, and that file is in danger of being removed or changed during that script's operation, you may elect to clear the status cache. In these cases, you can use the clearstatcache() function to clear the information that PHP caches about a file.

You should also note that PHP doesn't cache information about non-existent files. So, if you call file_exists() on a file that doesn't exist, it will return FALSE until you create the file. If you create the file, it will return TRUE even if you then delete the file. However unlink() clears the cache automatically.

Note:
This function caches information about specific filenames, so you only need to call clearstatcache() if you are performing multiple operations on the same filename and require the information about that particular file to not be cached.

Parameters

clear_realpath_cache
Whether to clear the realpath cache or not.
filename
Clear the realpath and the stat cache for a specific filename only; only used if clear_realpath_cache is TRUE.

Return Values

No value is returned.

stat - Gives information about a file

array stat ( string $filename )

Description

stat

Gathers the statistics of the file named by filename. If filename is a symbolic link, statistics are from the file itself, not the symlink.

lstat() is identical to stat() except that, for symbolic links, it returns the status of the link itself rather than the file it points to.

Parameters

filename
Path to the file.

Return Values

Numeric   Associative (since PHP 4.0.6)   Description
0         dev                             device number
1         ino                             inode number *
2         mode                            inode protection mode
3         nlink                           number of links
4         uid                             userid of owner *
5         gid                             groupid of owner *
6         rdev                            device type, if inode device
7         size                            size in bytes
8         atime                           time of last access (Unix timestamp)
9         mtime                           time of last modification (Unix timestamp)
10        ctime                           time of last inode change (Unix timestamp)
11        blksize                         blocksize of filesystem IO **
12        blocks                          number of 512-byte blocks allocated **

* On Windows this will always be 0.
** Only valid on systems supporting the st_blksize type - other systems (e.g. Windows) return -1.

In case of error, stat() returns FALSE.

Note:
Because PHP's integer type is signed and many platforms use 32bit integers, some filesystem functions may return unexpected results for files which are larger than 2GB.


Boosting Google's PageSpeed Module with TMPFS



Google's mod_pagespeed speeds up your site and reduces page load time. This open-source Apache HTTP server module automatically applies web performance best practices to pages, and associated assets (CSS, JavaScript, images), all transparently like a Squid Proxy.

With TMPFS you can dramatically improve the speed of mod_pagespeed and the webpages served by it. TMPFS will store/serve the optimized PageSpeed output directly from RAM!

Super-Boosting with TMPFS

The PageSpeed module in a nutshell applies several optimizations to the output sent by your server before it is sent to the client's browser.

 [ Server Outputs ] ===> [ PageSpeed Module Optimizes Output ] ===>  [ Clients Browser Receives Optimized Output ]

The optimizations are quite impressive indeed, including optimizing images, HTML, CSS/JS, whitespace minification, etc. However, since 100 visitors to your site may request the same page at the same time, it would be inefficient for PageSpeed to re-run the same optimizations on the same output for every visitor. So, the results are saved to the server disk in a temporary location. That way, the first visitor's request is the only one that requires PageSpeed to run all the CPU-intensive tasks like optimizing images; those optimized images are then saved to disk, so the next 99 visitors are served directly from disk without doing all the work over again on each request.

The downside to this is obvious to anyone with a solid grasp of system-performance.. Disk I/O! The disk can only do so much activity at a time, and it has physical limits to the speed and amount of I/O it can do at any one time. Meaning that if you had 1000 visitors all accessing the same PageSpeed-optimized image file from disk on each request, the Disk I/O would become a serious bottleneck (though still much much faster than having to do the optimizations each time for each request).

TMPFS to the Rescue

A Disk just stores binary 0's and 1's, and likewise RAM does the same. The difference is that RAM is at least 30x faster. This is why Google's internal systems running google.com searches are all RAM-based. They also use RAM-based filesystems to store all the pagerank/linking data which makes parsing and computing that data way faster.

In order to save, retrieve, locate, and modify data on a Disk, you need a filesystem. Windows uses a lot of pathetic filesystems such as NTFS. Linux OTOH, uses cutting-edge filesystems such as ext4, ZFS, XFS, ReiserFS, and of course, GoogleFS.

Unlike Disks, RAM does not retain the 0's and 1's when the power goes off, so having a filesystem on top of RAM didn't make sense... The GNU/Linux developers went for it anyway and several RAM-based filesystems were created. The easiest and best is called TMPFS.

TMPFS lets you save, retrieve, locate, and modify the 0's and 1's on a RAM device in the exact same way you use a hard-drive. Only, you wouldn't use TMPFS for anything permanent, since a reboot or power outage will always clear RAM.

TMPFS Example: favicon.ico

One of the easiest illustrations of how tmpfs can act like a 30x super-charger is this. Most browsers automatically request a sites favicon.ico file. If you had 10k visitors requesting the favicon.ico file at the same time it could cause a Disk I/O bottleneck. So what you could do instead is create a TMPFS in Ram and put the favicon.ico file there instead, then those 10k visitors requesting it at the same time would be served it directly from RAM!
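You can try this idea without any setup: most Linux distributions already mount a tmpfs at /dev/shm, so any file you drop there is served from RAM (the filename below is just for illustration):

```shell
# /dev/shm is a stock tmpfs mount on most Linux systems.
echo 'hello from RAM' > /dev/shm/askapache-demo.txt

# Reads are now satisfied straight from memory.
cat /dev/shm/askapache-demo.txt

# tmpfs contents vanish on reboot anyway, but clean up explicitly.
rm /dev/shm/askapache-demo.txt
```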

PageSpeed + TMPFS

So the idea here is to setup PageSpeed so that all the optimized images, white-space minified files, combined js files, etc.. are saved not to the slow Disk but are saved to the fast TMPFS. Then all of those optimized files are served directly from RAM! This not only improves the speed at which visitors receive the files, but also allows many many more visitors to be served at the same time. And it also improves the speed at which the PageSpeed module can generate, create, and update those optimized files.

Prepare the TMPFS

First thing to do is shut down apache/nginx. Then you will prepare and build the tmpfs filesystem.

Create tmpfs directory

# mkdir -pv /tmp/pgsp

Create Mount option in /etc/fstab

Get the uid and gid first: # id apache, which gives me:

uid=38(apache) gid=38(apache)

Now add this line to your /etc/fstab file.

tmpfs  /tmp/pgsp  tmpfs  rw,gid=38,uid=38,size=200m,mode=0775,noatime  0 0

Test the Mount

Finally, make sure it mounts automatically

# mount -a

Then check that it is listed in the output of mount

# mount
tmpfs on /tmp/pgsp type tmpfs (rw,noatime,gid=38,uid=38,size=200m,mode=0775)

Configure Apache/Nginx Pagespeed Module

Now that the tmpfs is all setup you just need to setup the Pagespeed Module to use it. Of course you will first need to install the module if you don't already have it.

Install and Enable the mod_pagespeed Apache/Nginx Module

Download and install

Configure mod_pagespeed to use the tmpfs

In the Apache/Nginx configuration file provided by mod_pagespeed, such as /etc/httpd/conf.d/pagespeed.conf, you need to set the directive ModPagespeedFileCachePath to the location of the tmpfs filesystem.

ModPagespeedFileCachePath    "/tmp/pgsp/"
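If you run the Nginx port of PageSpeed (ngx_pagespeed) rather than mod_pagespeed, the directive is spelled differently; a sketch, assuming a standard ngx_pagespeed build:

```
pagespeed on;
pagespeed FileCachePath "/tmp/pgsp/";
```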

Official PageSpeed Docs

