Missed deal hints at new plans: Google for music

“ It seems that cloud technology is opening its wings one by one. Google’s attempt to acquire the Silicon Valley startup La La Media Inc., even after losing the turf to Apple, was in a way a clear announcement of what the next milestone for Google might be: a full-fledged service like Google Books, catering to music only, for free (running over ads). ”

It might be just a good guess, but Google has already launched such a service in China in partnership with Top100.cn, offering legal MP3 streaming/download from a large MP3 collection, and this shows it is very likely that after getting the right deals (as it did for Google Books with various publishers), Google might move into music as well, much like iTunes.


If you think about what one uses a computer for, besides the internet, chatting and web search, it is editing photos, documents and presentations, watching movies, listening to music and so on. Google has already covered most of these: Picasa for photos, Google Docs for documents/presentations and YouTube for watching media. What remains is music.

Although there is a YouTube category for music videos, a very limited music search engine, an online music player and a media server in Google Desktop, Google currently doesn’t have any full-fledged music service.


With Google Chrome, Google wants to start an era where your every need will be online and will travel with you: no matter where you are or what connection you use, your desktop, your docs, your bookmarks, your web history, your videos, your books and everything else will follow you.

So before the launch of Google Chrome OS, we can surely expect Google to fill this gap by acquiring some company in a similar domain, and the next one will probably be a Google online games service. Sure, due to copyrights it is not going to be as open and simple as Picasa Web Albums, but you may certainly hope for a product with capabilities like Google Books (limited preview/full view).

What’s your opinion? Please write to me.

Google making its search results real-time now

“ In today’s fast-moving world, a delay of even a fraction of a minute in reaching information can make you lose the race. After both Bing and Google partnered with Twitter and Facebook, the world was waiting for the concept of real-time search to appear for ‘real’, and here Google has finally joined the race again (with in-depth preparation) by moving into real-time search, while Bing had already started with Twitter searches on July 01, 2009. ”



As the phenomenon of Twitter rose, it gave strength to the concept of real-time information search. In the early days, any news, before getting picked up by print or television media, hit the web first via blogs, sometimes with more detail than other media, but even then there was a delay of hours or days in getting the information live. The birth of Twitter changed that scenario: nowadays, whatever happens around the globe hits Twitter, Facebook and Orkut updates before anything else, and people look for information there rather than in news results or blogs.

Real-time search for headlines is moving towards Twitter, Facebook and Orkut updates these days, and it has been inviting the search giants to move into the race sooner or later. Microsoft became the first to strike the chord by partnering with Twitter and Facebook on Oct 22, 2009, while Google announced its collaboration on Dec 07, 2009 with Facebook, MySpace, FriendFeed, Jaiku, Identi.ca and, most importantly, Twitter.


So, you might be getting an idea of what real-time search is: it is the concept of searching for and finding information online as it is produced. In other words, it is the advancement of web search technology, coupled with the growing use of social media, that enables online activity to be queried as it occurs.


Let us see how to use it:

Google’s Real Time Search

To reach Google’s Real Time Search, go to a normal Google search, run some query, press the ‘Show options’ button (at the top left of the search results) and then press ‘Latest’, as shown in the image below. You will see results coming up with the time given next to them, in seconds or minutes. 🙂

(Screenshot: Google Real Time Search results)

Note that, as per Google’s usual practice, this feature is released as a beta feature only, and suggestions are always invited to improve it further.


Bing’s Real Time Search

To search through Bing, you need to go to http://www.bing.com/twitter, where you can search within Twitter; I have not yet been able to figure out how it works with Facebook.

(Screenshot: Bing real time search)


Now, there are reasons why Google Real Time Search is better than Microsoft’s Bing (courtesy CRN), and even if it is declared a war between the two, many of us have already declared Google the winner.

“ Not only does Google have much better and proven search technology than Bing could have, it has also designed its search results better visually than Bing. And as a final reason: sure, Microsoft struck the chord first by integrating with Facebook and Twitter, but it couldn’t match the breadth and depth Google’s real-time search is going to provide by partnering with a wide and diverse range of social networking sites. ”

So finally, we surely have many reasons to look forward to how much more innovation is about to come from this rivalry between Microsoft and Google.

Now LinkedIn gets a makeover for the better

As technology and Ajax keep changing most sites for the better, after crossing the legendary 50 million+ users mark, the business social networking giant LinkedIn made the same move, as reflected in its blog post of Dec 09, 2009. Yes! LinkedIn’s new face went live on Dec 09, 2009 to make its appearance smoother and more productive.



It seems December is a busy month not only for Google: LinkedIn has been in the news for keeping its growth rate at 5.68%, while growth is flattening these days at Facebook and Twitter. In fact, the last million users took only 12 days to be added, which surely shows how it is gaining pace in organizations and businesses.

To keep up the pace, LinkedIn has also moved into deals like integration with Twitter and Outlook, launched its developer platform, and now this makeover. After logging into LinkedIn from December 09, many users won’t be surprised, as the changes had already been rolling out to a portion of users since Nov 06, 2009; Dec 09, 2009 is special because, from that day, LinkedIn officially rolled out the same interface to all LinkedIn users.


For the rest of the UI details, you may check the official announcement, while I am putting up some screenshots to give you a feel for the change (although you might already have had a taste over the last few weeks, if you were among those lucky users).

(Screenshots: LinkedIn profile, groups, search and global navigation)

Offline Gmail – Are you still not using it?

“ Have you ever been tired of switching to HTML mode in Gmail due to your slow internet connection? Have you ever waited and waited for some mail to open in the feature-rich, Ajax-based standard version of Gmail? Have you ever wished you had an email client that works just like Gmail does in its web version? Here is the solution from the Gmail side: the Offline feature. ”

“ Officially introduced on January 27, 2009 as an experimental Gmail Labs feature that came to change the capabilities of web-based mail, Offline Gmail is a regular part of Gmail from December 07, 2009. ”

P.S.: It’s meant for your personal PC only, not a public one at an office or cafe.



Google is already making itself the center of the whole internet world and preparing the world for cloud computing. It looks like, instead of choosing to develop an email client of its own, having made many users prefer the web version of Gmail over any email client, Google has now come up with the solution of enabling Gmail to work in offline mode. It is expected to work the same way its other service, Google Reader, has long worked in offline mode (since May 31, 2007).


Web-based email is great, as one can access it from anywhere, but the catch is that it is limited by the internet connection itself. So Gmail provides the solution of caching mail on the local PC through Gears. As long as you are connected to the internet, the local cache keeps itself in sync with Gmail’s servers, and when you lose your connection it automatically switches to offline mode and uses the data stored on your computer’s hard drive instead of the information sent across the network.


“ With offline mode, you can still read messages, star and label them, and do most of the things you’re used to doing while reading your webmail online, and that too with blazing fast performance (I felt it was even faster than regular Outlook Express or Windows Live Mail; just touch a mail to open it). Any messages you send while offline will be placed in your outbox and automatically sent the next time Gmail detects a connection. ”

Not only this, but with the now-matured feature, two more in-demand options have been added to Offline Gmail: an option to choose which messages get downloaded for offline use, and the ability to send attachments while offline.

“ Now, it may even replace Outlook Express (or any other mail client) for a few users, who could keep using Gmail offline all day long and connect to the internet just once or twice a day. ”


So, what are you waiting for? Start with Offline Gmail. First of all, you need Google Gears, and obviously the browser you are using must support Gears (like Internet Explorer 7.0+, Firefox 2.0+, Safari 3.0+ and Google Chrome); then follow the instructions given below:

  1. Click the “Settings”  link in the top-right corner of Gmail.
  2. Click the “Offline” tab.
  3. Select “Enable Offline Mail for this computer”.
  4. Click “Save Changes” and follow the directions from here.

After the browser reloads, you’ll see a new “Offline” link in the upper right hand corner of your account, next to your username. Click this link to start the offline setup process and download Gears, if you don’t already use it.

I am also putting up some screenshots to show it more closely.

Installing Offline Access

Google Gears will show a Security Warning

Creating a desktop shortcut to access Gmail



With all the excitement out there, I would also like to point out the differences between normal email clients like Outlook Express, Thunderbird, Windows Live Mail etc. and the web-based offline solution of Gmail.

First of all, I am not yet sure whether procedures for backing up and exporting Google Gears data even exist for administrators like us. Among a few known issues, Offline Gmail still doesn’t support access to the Contact Manager (although the auto-complete feature works), complete search results (for obvious reasons), access to conversations in Spam and Trash (considered less essential), and some Gmail Labs features.


Also, users who always use Gmail in HTTPS mode will face many glitches with Offline Gmail (although I don’t think it is a product for such security-conscious users; I recommend it for home users and personal laptops only).

But still, for most of the Gmail user community, it is a very welcome addition to the already feature-rich Gmail. I am sure many would like to use it.


Did you know that many other things can also be used offline through Google Gears and other services? You can make websites readable offline too; check webnol’s article.


Nitish Kumar

Google Dictionary – some announcements come silently

“ On Dec 4, 2009, Google, very much unlike its other product launches, ‘silently’ added one more service to its catalog, named Google Dictionary: a very clean, simple yet powerful interface offering meanings of words in 28 different languages, which can also translate terms from or into English. Maybe it was not announced because it was already part of Google Translate and has just come out as an independent product. ”

Interesting, na? The word Google, once a misspelling (the actual spelling was Googol), first got included in the Oxford English Dictionary in July 2006, and now it is coming up with a dictionary of its own. 🙂

(Screenshot: Google Dictionary)


Now, something about Google Dictionary: even if you haven’t started using Google Translate, have you ever noticed that when your query is a single word or an expression, Google links to its definition in the blue bar that mentions the number of results? Check the pics attached for reference.

(Screenshot: the definition link in Google’s results bar)


Google used to offer the “define:word” query syntax to get definitions (for example, searching define:blog), and the dictionary as an independent product is the natural extension of that.

It is a welcome move for normal users, who are addicted to Google and wish for straight answers to their definition searches, with synonyms included, but advanced users might miss answers.com: earlier Google picked definitions from there, which extracted information from encyclopedias and aggregated it from various sources.

Google Dictionary displays synonyms, definitions, related phrases for the word and also web definitions from other places. You can star the words you search using the star before the word, and later see your starred words for reference.


Ohh!!! Are you asking why to use Google Dictionary, or how? Just think: have you ever needed to press a help button to use any newly launched Google product, like you do with products from other companies like Microsoft? 😛

Nitish Kumar
 

Change to Google Public DNS – it rocks

Dec 03, 2009 – In an effort to make web browsing better, the internet giant Google has come out with a brand new service, Google Public DNS: an experimental public DNS resolver. Thanks a lot to Devil’s Workshop and Aditya Kane’s article for introducing me to the news on Dec 06, 2009. I really think I should take some time out of my schedule and increase my awareness a little more.



DNS surely plays a major part in your browsing experience, as it is a kind of internet phone book or address book that takes you to the exact page you are looking for once you type or click a URL. Obviously, the faster you can find the exact number (the IP address here), the faster you can dial; the faster you find an address in the address book, the faster you can reach it. So it is surely a great move, given the level of service Google has provided so far.

If an ISP’s DNS is down or facing issues, it doesn’t get as much attention as Google’s going down would. It will be under high scrutiny now that it is announced, so we can surely rely on it on the basis of our past experiences.


It is not the first attempt to bring a public (non-ISP) DNS service into play. OpenDNS, Level3, ScrubIt and many more players are already out there, providing faster DNS resolution than your ISP itself, and you can test for the fastest DNS servers around you yourself. But when a name like Google enters the race, you can be assured that your web experience is going to get better and better, due to the level of quality and competition it will bring in.

Personally, I have not used DNS servers other than my ISPs’ and OpenDNS, so I can compare this service with those two only, but in my experience OpenDNS has always been better than any of the ISPs. So far, in whatever I have got to test, I found Google DNS outperforming OpenDNS and Level3 all the way; check the technical reference.
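If you wish to run a rough comparison yourself, one quick (and admittedly unscientific) way is to query each resolver for the same name with dig and compare the reported query times; 208.67.222.222 here is OpenDNS’s public resolver:

dig @8.8.8.8 www.google.com | grep "Query time"
dig @208.67.222.222 www.google.com | grep "Query time"

Run each a few times, as the first query is often slower while the resolver fills its cache.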


The results show that we have a very, very good reason to switch over to Google Public DNS (8.8.8.8, 8.8.4.4), but what might be concerning many people is the motive behind it. Why is Google bringing it in? Won’t this enable Google to keep a closer check on everyday users’ internet habits, and so improve its ads system further? Will this give Google the power to show ads even on mistyped URL pages, which were in the hands of ISPs only till now? Aren’t there many privacy concerns out there?

Google has partially dispelled these concerns, saying it is not going to store data tied to a particular user for long, and will save data only if the users themselves agree, and even then only anonymized data.



Personally, I am not much concerned about these privacy things, as Internet Explorer, Firefox, Chrome and Google searches (when we are logged in with Gmail) are already gathering all of our internet habits, whether a normal user knows about it or not. What I am really looking for is this: when is Google going to give us control over this public DNS system the way OpenDNS does? With a free OpenDNS account, you can customize your redirection search page with your own logo, watch over the internet habits of your users yourself, and have your own filtering system or typo correction. Once that is done, Google Public DNS is surely going to outrun OpenDNS and the others.

So, what are you waiting for? Start using Google Public DNS for better web browsing.
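Switching is just a matter of replacing your configured name servers with 8.8.8.8 and 8.8.4.4, either in your network adapter’s TCP/IP settings (Windows) or in /etc/resolv.conf (Linux). A minimal sketch of the Linux side, with a quick check that the new resolver is answering:

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

nslookup www.google.com 8.8.8.8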

Nitish Kumar

Making Squid Server from Scratch: The Dummies Manual

Most of us have heard of Squid, mostly while discussing requirements for restricting internet usage among clients. A requirement for Squid may arise for a few of the following reasons, or anything else:

1- To limit bandwidth usage: Squid optimizes data flow between client and server to improve performance, and caches frequently-used content to save bandwidth (as data is accessed locally, not through the ISP, for repeat requests).

Moreover, organizations might have limited bandwidth, or bandwidth that becomes expensive over some threshold, so management cannot permit employees to download inappropriate material, as it uses precious bandwidth (there are even options to limit the download size through the Squid server, which might be handy for such a scenario; see the sketch after this list).

2- Due to organizational policy: Sometimes organizations have very strict internet policies regarding offensive material. For this, and for other reasons like controlling distractions, they don’t want their employees gaining access to inappropriate sites.

3- To limit usage to defined hours: Sometimes organizations need to provide internet access to employees during certain working days/hours only.

4- To monitor site access patterns: Sometimes, instead of (or in addition to) restricting internet access, the purpose might be monitoring usage patterns before further steps to optimize or restrict.
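As a minimal sketch of the download-size limit mentioned in point 1: Squid’s reply_body_max_size directive caps how large a single reply body may be. The exact syntax differs between Squid versions; the Squid 2.6-style form is shown below, and the 10 MB figure is purely illustrative:

# Cap any single download at roughly 10 MB (value in bytes) for everyone
reply_body_max_size 10485760 allow all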

The most special point about Squid is that it is open source, with a vast amount of information and tweaks available through forums and blogs. That’s why it is the most preferable solution for any such scenario.


Here I am providing a step-by-step dummies’ manual for implementing a Squid proxy server, for laymen like me, which should surely be helpful to many of us (including myself).

Step-by-Step with the implementation:

1- Base Machine: For my deployment, I chose CentOS as the Linux installation, due to the availability and reliability of update sources for the OS itself (it is a replica of Red Hat Enterprise versions with almost all features). The configuration of the machine was a 2.66 GHz Core 2 Duo processor, 1 GB RAM and 160 GB HDD space.

Installation was customized to have a 2 GB swap partition, a 200 MB boot partition, the Squid package checked, web server packages checked, SendMail-related packages checked (Squid may be configured to send reports by mail), and MySQL/PHP packages checked (not required for Squid itself, but they might be needed for reporting software later on).

2- Setting Up the services: We need just one service, Squid specifically, but I recommend keeping the same server up as an Apache web server as well, so that you can customize Squid error messages with pics or logos.

Here is the basic way:

# chkconfig squid on
# chkconfig httpd on

The above commands set the squid and httpd services to start on boot. For dealing with the Squid service later, you can always use the following commands:

# /etc/init.d/squid start
# /etc/init.d/squid stop
# /etc/init.d/squid restart

Although I will come to the firewall and iptables stuff in a later part of this manual (integrating Squid and iptables is pretty much necessary for any production environment), for people who wish to keep things minimal with Squid, here is the minimum needed to do with the firewall. First check whether port 3128 is open or not:

# netstat -tulpn | grep 3128

If not, then the next part would be:

# vi /etc/sysconfig/iptables

And append the following line to open up the port 3128 for squid:

-A RH-Firewall-1-INPUT -m state --state NEW,ESTABLISHED,RELATED -m tcp -p tcp --dport 3128 -j ACCEPT

And finally, restart the iptables service (the firewall service):

# /etc/init.d/iptables restart

3- Configuring Squid: By now you have the Squid service up and running; the next and major part remaining is setting up the configuration, defining ACLs and setting access groups to get a basic Squid configuration running. Except for creating a few files to store domain names to allow/deny, or keywords to deny, most of the work is done by editing the Squid configuration file squid.conf:

# vi /etc/squid/squid.conf

The starting step of playing with squid.conf is setting a hostname for Squid, which is essential for it to work. Find the visible_hostname directive and set it by putting in a name:

visible_hostname squidproxy
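Whenever you edit squid.conf in the steps below, you can sanity-check the file and apply changes without a full service restart; both of these are standard Squid invocations:

# squid -k parse
# squid -k reconfigure

The first checks squid.conf for syntax errors; the second tells the running Squid to reload its configuration.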

Now, first we need to understand the basic requirements and then design a policy accordingly. So, what might your general requirements be?

1- You may require groups of IP addresses (different sets), each with customized web access per requirements/policy.

2- You may require that a few groups be restricted to only a few listed sites, while a few groups need access to most of the web (even undocumented sites), with a few inappropriate ones blocked, either domain-based or keyword-based.

3- You may require a set of usernames/passwords for web access, along with rules like the two above. (I am not taking this specific one as my case, for simplicity.)

Although there are numerous use-case scenarios for Squid, I guess the ones above cover most corporate scenarios for basic security administration. So I am starting with these.


For documentation and readability purposes, you first need to name the various requirement groups, like IT, Management, Team1, Team2 etc., and then we will proceed to configure the policy for each group.

The rest is all about Access Control List definitions. One can limit users’ ability to browse the internet through ACLs. Each ACL defines a particular type of activity, such as an access time or source network; then each ACL statement is linked to an http_access statement that tells Squid whether to allow or deny traffic that matches that ACL.

Squid matches each web access request it receives by checking the http_access list from top to bottom. If it finds a match, it enforces the allow or deny statement and stops reading further (that’s why you need to be careful not to put a deny statement above a similar allow statement).

Note: The last http_access statement denies all access; that’s why we need to keep all of our customization above that line.

Making the Internet Access Policy: First set of rules (template): Start from the Access Controls section. First you need to name a group of IP addresses, and then define ACLs for domain-based/keyword-based site blocking. I am taking the case of IT support web access, where we need to block a selected list of sites and keep the rest of the web open. Although the format is given in squid.conf itself, I am putting it here as well. There are two ways to define the address range, as given below:

acl aclname src ip-address/netmask
acl aclname src addr1-addr2/netmask

As a next step, it is better to keep everything (allowed/denied networks, denied sites, denied keywords) in external files, so that later updates can be done without touching squid.conf itself; moreover, backing up the configuration then just means backing up those files plus squid.conf, which is much cleaner and more readable than squid.conf usually ends up being.

Here I am taking the first case, a management network (just an example use case).

The requirement is: we have to allow some specific IPs to access the internet; some specific sites like Orkut, Facebook etc. might need to be blocked; some specific keywords like porn, xxx might need to be blocked; and you might even have some machines in the same IP range that should not get any internet access at all.

The following snippet of configuration shows how to do it (the ACL names are self-explanatory):

# ACLs to define Management Network 
#——————————————————- 
acl management_network src "/usr/local/etc/squid/management/management_network" 
acl management_deny_network src "/usr/local/etc/squid/management/management_deny_network" 
acl management_deny_sites dstdomain "/usr/local/etc/squid/management/management_deny_sites" 
acl management_deny_keywords url_regex -i "/usr/local/etc/squid/management/management_deny_keywords"
#——————————————————-

Now, the next and final set of configuration entries denies the selected domains and keywords first and then allows the rest of the web (Squid scans top to bottom):

# Allow/deny web access to Management Network 
#——————————————————- 
http_access deny management_deny_network 
http_access deny management_deny_sites 
http_access deny management_deny_keywords 
http_access allow management_network 
#——————————————————-

Now, most importantly, you need to create these files at their respective locations and put the required entries in them.

The benefit of this approach is that any newbie can maintain the Squid, as usual maintenance only asks for adding/removing IPs and adding/removing sites and keywords to deny. It saves squid.conf from being messed up again and again by simple requirements and, moreover, keeps it clean and readable.

This way, all the files are kept outside the Squid directory, keeping other IT staff from messing with the actual squid.conf itself for any short-term requirement. There is a folder /usr/local/etc/squid, and I’ll make folders inside it with the names of the access groups as required (as in the above case, where I made a folder named management).

management_network keeps the IP addresses to allow. The syntax is one entry per line: a single IP, a range like 172.16.1.25-172.16.1.50, or a subnet like 172.16.11.0/24.

management_deny_network keeps the IP addresses that should not get any internet access.

management_deny_sites keeps the domains to be denied (one domain per line).

management_deny_keywords keeps keywords which, if contained in any URL, cause the whole URL to be blocked (like xxx).
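For illustration, here is how those four files might look (all entries are hypothetical, following the syntax described above):

# /usr/local/etc/squid/management/management_network
172.16.11.0/24
172.16.1.25-172.16.1.50

# /usr/local/etc/squid/management/management_deny_network
172.16.11.200

# /usr/local/etc/squid/management/management_deny_sites
.facebook.com
.orkut.com

# /usr/local/etc/squid/management/management_deny_keywords
xxx
porn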

More Restrictive Policy for another group of IPs: Second set of rules: Now consider a requirement where you have to allow only a provided set of domains/websites and restrict the rest of the web, i.e. just the company mail site/website.

Again, you will need to pick another range of IP addresses and then define the rules in the following way (on the above pattern). Say the network is the MIS network:

# Permission set defined for MIS Network – Nitish Kumar
# —————————————————————
acl mis_network src "/usr/local/etc/squid/mis/mis_network" 
acl mis_deny_network src "/usr/local/etc/squid/mis/mis_deny_network" 
acl misGoodSites dstdomain "/usr/local/etc/squid/mis/misGoodSites"
# —————————————————————

Next come the http_access entries: deny the blocked machines, allow the good sites for the MIS network, and then deny everything else from that network (Squid scans top to bottom):

# Defining web access for MIS Network – Nitish Kumar

# ———————————————————-
http_access deny mis_deny_network
http_access allow mis_network misGoodSites

http_access deny mis_network

# ———————————————————-

The file naming convention is the same as in the last case. Here the misGoodSites file contains the names of those domains which will be allowed; all the rest will be restricted.

In this way, the second kind of requirement is met, restricting web access in an aggressive way where only the listed sites are allowed.

Note: In this scenario, you will receive requests about sites not opening properly, with missing frames/pics etc. The reason for such issues is third-party domains embedded in the domains we allowed: the frames and pics are blocked because they come from unlisted domains. In such a case, you need to find those third-party domains and allow them in the good-site list.

So, here is the simplistic configuration for Squid. There may be many use cases and many on-the-fly custom issues per scenario, which can be worked out easily on the basis of the extensive support provided through blogs and forums all over the web.

The rest of Squid management concerns the internet connection and log management. If the internet connection is working on the Squid server, it should work on a client after configuring the proxy IP/port in the client’s internet options.

As for directories and logs: the cache directory is /var/spool/squid and the log directory is /var/log/squid, and the important log files, which will need to be managed later on, are store.log, access.log, users.log and cache.log. Note that Squid can handle a maximum log file size of 2 GB only; beyond that, the Squid service will terminate, so you have to take care of that. Fortunately, the logrotate program automatically takes care of purging the data.
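CentOS normally ships a logrotate policy for Squid already; as a sketch (the retention values are illustrative), a stanza like this in /etc/logrotate.d/squid rotates the logs weekly and asks Squid to reopen them, squid -k rotate being the standard command for that:

/var/log/squid/*.log {
    weekly
    rotate 5
    compress
    notifempty
    missingok
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}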

Now, with the above, anybody can easily configure a working proxy server and happily live with it later on, more easily than other Squid configuration manuals suggest.

For people asking for more, here are a few more tips and recommendations.

Blocking MSN/ Yahoo/ Gtalk Messengers

Sure, most of you will come across such a requirement, and the trouble with it is that the leading messengers know they will face a proxy at some places, so they already come with ways to bypass the proxy itself, which makes the job a bit difficult. Here is how to accomplish the task.

First, define the list of IP addresses that some smart messengers like MSN or Yahoo could use (like 64.4.13.0/24, 207.46.104.0/24). The section below goes in the network definition section:

acl bannedips dst "/usr/local/etc/squid/bannedip"

Now, here is how to use the rules to block messenger traffic:

# No Messenger
# ———————————————————-
acl stopmsn req_mime_type ^application/x-msn-messenger$
acl msngw url_regex -i gateway.dll
http_access deny stopmsn
http_access deny msngw
http_access deny bannedips
# ———————————————————-
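The bannedip file itself just lists those ranges, one per line (using the example ranges mentioned above):

# /usr/local/etc/squid/bannedip
64.4.13.0/24
207.46.104.0/24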

No Cache for selected sites in Squid

Caching is good for sites with mostly static content, but it can create lots of session-related troubles on sites with more dynamic content, and it might be better to choose not to cache any data for a particular set of sites. Here is how to implement it:

# Defining list to preventing caching for sites
# ——————————————————————-
acl prevent_cache dstdomain "/usr/local/etc/squid/No_Cache_Sites"
acl prevent_cache_file url_regex -i "/usr/local/etc/squid/No_Cache_Ext"
# ——————————————————————-

The above part needs to go where the network ranges are defined (above other custom rules), and the part below has to be placed with the rest of the http_access statements (above other custom rules):

# Preventing caching for particular sites
# ———————————————————-
no_cache deny prevent_cache
no_cache deny prevent_cache_file
# ———————————————————-

Now we need to put the domains which should not be cached in the No_Cache_Sites file, and the file extensions not to be cached in the No_Cache_Ext file, and the Squid server will stop caching for the mentioned domains/file extensions after a restart of Squid.
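For illustration, the two files might look like this (entries are hypothetical; No_Cache_Ext feeds a case-insensitive url_regex, so regular expressions are expected there):

# /usr/local/etc/squid/No_Cache_Sites
.mail.example.com

# /usr/local/etc/squid/No_Cache_Ext
\.jsp$
\.asp$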

Need pics/logos in Squid error messages?

What if you wish to customize the error message screen you get from Squid? You have to reach the error file named ERR_ACCESS_DENIED, somewhere under /usr/share/…., and then edit it with normal HTML. Lots of things can be done with this, but what many people wish to do first is put some GIF or logo in the error message.

Although I don’t favour putting images in the error message, as it makes the page a little heavier than it originally is, here is the workaround.

Putting the image in the same directory as the ERR_ACCESS_DENIED file doesn’t work; what you need is to make the Squid box itself a web server (that’s why I suggested keeping an Apache installation on the same server) and then reference the required image through a web path on the same Squid server. Also note that you need to allow access to the Squid server from all those PCs where this error message is expected to appear; otherwise, you will get the error page without any GIF or pics on it.
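So inside ERR_ACCESS_DENIED the image is referenced by URL rather than by file path. A minimal sketch, assuming the Squid box is reachable by the hostname squidproxy and Apache serves a (hypothetical) logo.gif from its document root:

<!-- load the logo over HTTP from the Squid box itself -->
<img src="http://squidproxy/logo.gif" alt="Company logo">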

The whole network range can be allowed to access the Squid server in the following way:

# Permission set defined for Complete Network
# ————————————————————-
acl all_network src 172.16.0.0/16
acl GoodSites url_regex -i "/usr/local/etc/squid/GoodSites"
# ————————————————————-

And as per the convention I have followed throughout, the lines above go in the section defining network-range ACLs, and the lines given below go with the rest of the http_access statements:

# Defining web access for All Network
# ———————————————————-
http_access allow all_network GoodSites
# ———————————————————-

Outlook and Squid Solved: Requirement of iptables (Firewall)

Why is my Outlook not working behind Squid?
How can we use Outlook Express or any other mail client behind Squid?
Squid is running fine and filtering HTTP traffic, but how do we use SMTP/POP3 with Squid?

It’s very easy to find people coming up with such queries. I wish to make a clear statement here: “Squid has nothing to do with Outlook or SMTP/POP3 access”. Squid is nothing but an HTTP proxy, which can intercept requests coming over HTTP ports only, not the POP3/SMTP ports.

Disappointed? Don’t be.

Even though it is not Squid’s job, you can make use of iptables (the built-in Linux firewall), which will not only solve the above issue but also add more security to your Squid.

What needs to be done with iptables is given below:

1. First of all, the Linux box should act as a router, forwarding all requests coming on ports 25 and 110 to the outside; that means IP forwarding is required.

2. Next, as IP forwarding is enabled and any request coming to the box goes outside, all ports need to be secured and controlled.

3. We need to redirect all requests coming to port 80 to port 3128, where the Squid rules will govern internet access.

4. We need to keep only the required ports open on the Squid box (like 22, 3128, 25, 110, 995, 465).

5. We can define which workstations are able to use SMTP/POP3 through the same server.

6. We can define that only a few workstations are able to SSH to the Squid server.

For allowing SMTP/POP3 connections, your Linux box (the Squid installation) needs to act as a gateway, entered as the default gateway on the client PC. To do so, one needs to enable IP forwarding on it.

It’s disabled by default. To check it, you may type the following:

cat /proc/sys/net/ipv4/ip_forward

If the output is 1, there is nothing to do; if the output is 0, it needs to be turned ON.
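For a quick, non-persistent test you can flip it at runtime (this reverts on reboot):

# echo 1 > /proc/sys/net/ipv4/ip_forward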

To turn IP forwarding ON permanently, change the value of net.ipv4.ip_forward from 0 to 1 in the file /etc/sysctl.conf. The change takes effect after either a reboot or the command:

sysctl -p /etc/sysctl.conf

Once you have enabled it, the immediate step is to redirect all traffic on port 80 to port 3128, secure the other ports, allow the required ports, allow ICMP ping, allow SSH etc. Edit the /etc/sysconfig/iptables file and put the following in it:

*nat
:PREROUTING ACCEPT [631:109032]
:POSTROUTING ACCEPT [276:26246]
:OUTPUT ACCEPT [276:26246]
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
COMMIT
*filter
:INPUT DROP [490:62558]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [10914:7678585]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 110 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 25 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 110 -j ACCEPT
-A INPUT -d 172.16.8.10 -p tcp -m tcp --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 10051 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 10050 -j ACCEPT
-A INPUT -d 172.16.8.10 -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -s 172.16.8.10 -p tcp -m tcp --sport 80 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -s 172.16.8.10 -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j ACCEPT

COMMIT

In the above, I have opened ports 22, 25, 110, 10051 and 10050 (Zabbix), and also allowed ICMP ping and the web server (as I will use SARG for reporting on Squid access) for all.

Now, after this, if you use the Squid server’s IP address as the default gateway, you will be governed by all the Squid rules (without putting Squid’s IP address in the proxy settings) and will also be able to send/receive emails in Outlook (note that currently everyone is allowed over port 110, and port 22 is open to all).

Task: Enable or allow ICMP ping for incoming client requests

For people looking to enable ICMP ping only, use the following commands in order.

These rules enable ICMP ping for incoming client requests (assuming that the default iptables policy is to drop all INPUT and OUTPUT packets):

SERVER_IP="IP_Address"
iptables -A INPUT -p icmp --icmp-type 8 -s 0/0 -d $SERVER_IP -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type 0 -s $SERVER_IP -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT

Task: Allow SSH from given IP Addresses only

These rules allow SSH from one given IP range only (assuming that the default iptables policy is to drop all INPUT and OUTPUT packets on the SSH port).

Although there are many other ways to do it, I am putting the iptables way here:

iptables -A INPUT -p tcp -m state --state NEW,ESTABLISHED -s 172.16.12.0/24 --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -d 172.16.12.0/24 --sport 22 -j ACCEPT

This will allow only IP addresses from the 172.16.12.0/24 range to SSH into the box. Similarly, individual IP addresses and ranges can be allowed.

I hope I have provided complete info for anyone wishing to start with Squid. I request you all to post your queries, so that I can make this manual better and cover more and more aspects. Although it works perfectly, the iptables part of my manual is a little messy; I would welcome it if someone suggested more flexible ways (preferably file-based rules) with easy conventions.

I also recommend using SARG for daily/weekly/monthly online reporting, as it is effective and very easy to use. Here is how to implement it.

So, enjoy happy, safe browsing with SQUID.