A Blog about Linux, Open Source and Code! 

I’ve decided to re-write my PHP how-tos. This is the first, and it doesn’t actually include any PHP. Instead, this tutorial simply shows you how to set up a basic HTML form with two input boxes and a button. The tutorial following this one will explain how to write a PHP script that receives the details from this form, processes them and then does something with the result.

This tutorial assumes that you already have a reasonably in-depth knowledge of HTML and how it works. If you don’t, stop reading now and go learn a little HTML before you start with PHP; if you don’t understand the supporting HTML, you won’t understand what the PHP is doing. I also assume that you have some form of test server, be it your own server or shared hosting that supports PHP. You must have some way of testing your PHP code for this tutorial to work, so if you don’t have one, get one. I will try to write a tutorial on installing a basic LAMP (Linux, Apache, MySQL, PHP) server at some point in the future and link to it here, but as yet I have not written one.

So, after declaring our HTML DTD and putting in our standard html, head and body tags, we need to start writing our form. We are going to use label tags here because, in a later tutorial, I’ll show you how to use them to properly space out your form using CSS, and of course we are using input tags for our text boxes and submit button. The most important attributes of these input tags are the “name” attributes; these are the identifiers that will get passed across to our PHP script in the next tutorial via the POST method.
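The form itself looks something like this. Treat it as a minimal sketch rather than a definitive listing: the field names "name" and "age" match the form described in this tutorial, and the form posts back to its own file (see the note on saving below), but the title and layout are entirely up to you.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>My First Form</title>
</head>
<body>
<!-- The "name" attributes below are what the PHP script will read from POST in the next tutorial -->
<form action="my-first-form.php" method="post">
  <label for="name">Name:</label>
  <input type="text" name="name" id="name" />
  <label for="age">Age:</label>
  <input type="text" name="age" id="age" />
  <input type="submit" name="submit" value="Submit" />
</form>
</body>
</html>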

Once you’ve got your form laid out the way the code above shows (yes, at this point you can copy and paste a little, but beware: you need to understand what you’re copying and pasting, or the other tutorials may not work properly and you may not benefit from them), you should be able to upload it and see a basic HTML form in your web browser.

Save the file as something simple like “my-first-form.php”. Save it as a .php file: this way we can test, first and foremost, that the PHP engine is running, and later we are going to add error handling, which will require the form to be a PHP document. It’s not pretty right now and it’s nothing special, but it’ll work for what we want.
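As a quick sanity check that the PHP engine really is parsing the file (my own aside; the tutorial itself contains no PHP yet), you can drop a single line of PHP anywhere inside the body:

<p><?php echo "PHP is working"; ?></p>

If the page shows “PHP is working”, the engine is running; if you see the raw <?php tag when you view the page source, the server is serving the file as plain HTML and you need to check your setup.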

Your form should have a text box to enter a name, another to enter your age, and a simple submit button. If your form does not show the way I’ve described, then something has gone wrong and you should start again; if the page does not display at all, you may need to look at your server setup. If your form displays correctly then all is well and you can move on to PHP Tutorial 1.1 – Processing simple form input using PHP and the POST method.

The code in this tutorial is under no license whatsoever and is completely free to be re-used by anyone and for any purpose. No warranty or guarantee is provided with this code, and it is used, re-used, re-distributed or sold at the person’s own risk. Neither Symsys nor the author of the tutorial is responsible in any way for how this code is used by a third party or how it may be developed by that third party. Please use any and all code here responsibly.


Filed under: PHP ... Comments (4)


  

 





Author:  Gremlette
November 21, 2008



 

 

Laycat, Kyklo, what next? … and it even admits it is ‘cloaking’ itself

When I was looking through my November website logs, Laycat and Kyklo were among the highest-visiting robots, above Yahoo and Google. Of course, I googled them to see what on earth they were, and sure enough other people were also complaining they were their highest visitors.

It is a relatively small cross-section of web designers and developers that actually look through their logs, and we’re among them; the hits from Kyklo and Laycat were too big to ignore. Only a handful of people at the time reported on this particular robot; some said that they were getting a minimum of 550 hits, e.g. http://jagf.net/blog/?tag=laycat.

For a short period, Laycat.com issued a web crawler notice on their site saying that they were simply gathering information for a new search engine… and that was good enough for some, since a poster had copied and pasted the robot notice onto a forum. But the robots are sporadic, keep changing names and hit A LOT, and on multiple occasions when the links to their website were checked there was no information there, which is why this post was originally written. It looked a bit dodgy.

Now that this post has been brought to the attention of Laycat/Kyklo, the very plain robot information page is back online, and I have been assured by the admin at Laycat that it must have been temporary down-time when I was looking.

There are currently 3 known robots, all named differently, operating under the same people (rather odd, and how many more are there?): Kyklo.com, aceleo.com and laycat.com. Not to tell someone else how to run their operation, but couldn’t you simply use 3 different server names at one domain, for example kyklo.laycat.com, aceleo.laycat.com and laycat.laycat.com? This might make people slightly less suspicious of 3 different robots with completely different names linking back to the same place.

http://www.kyklo.com and http://www.aceleo.com both redirect to http://www.laycat.com/. Don’t expect anything too fancy: it’s just a plain robot information blurb, with no site, no branding or company information, nor anything further. Despite being asked for further details on several occasions, they will not oblige, and instead insist we change our public (and, might I say, rightfully free) opinion of it without providing further information. I’m sorry, but if that’s the way I ran my life, I’d be a devout Christian who thought science was just the devil’s way of trying to trick us, because I’d be ignoring all evidence and putting my faith in someone else’s words.

The admin at Laycat has been extremely bitter and resentful about their bots being mentioned on here in a sceptical light. Their initial contact was immediately followed by the post being re-titled, their admin being thanked for the 3 links above, and their robots text being acknowledged as re-issued online… and I got told I was being ‘nasty’!

Without further aggravation from us, the Laycat admin continued to bombard us with very long comments laced with further derogatory remarks, calling us ‘undocumented trolls’ and using childish tactics such as posting word counts of his posts, because we had said the comments’ length may have had something to do with Akismet spam-canning them. He ripped our post and comments apart line by line with negatively verbose responses (just like what would normally be considered “a troll” on most forums/blogs). We were painted as simpletons writing rubbish just to drive people through our affiliate links (hardly advert city here, with a maximum of 4 links placed for layout aid versus 30+ links to our own site and services). We just won’t stand for that: tell us we’re wrong by all means, but provide proof of it; don’t just bombard the comments with links and excuses.

Laycat (also Aceleo and Kyklo… even though I was told by Laycat that it was ‘Kyclo’, not ‘Kyklo’, despite the website being kyklo.com) has an absolutely stinking attitude, to say the least. Given Laycat’s response, the dawn of a new search engine being the reason for these robots has become highly unlikely in our minds, and if it has that sort of childish mentality at the head of it, then frankly we don’t need it. Considering the type of responses that were given, we find it far more likely that this new “search engine” will be the next web ripper and not a search engine at all. Due to the nature of our site in comparison to the nature of his comments, we have been forced to remove ALL comments, re-write this post appropriately and close further comments. If admin@laycat.com would like to comment further on this post, we invite him to use our contact form http://www.symsysit.com/core/Symsys-Contact-Details.php to do so. Beware, though: if you fill your email to us with lots of links, a massive character count, swear words etc., then our web spam filter will probably pick it up as well.

As repeated in all of Laycat’s comments, it is highly recommended that their bots be blocked by IP banning and robots.txt block lists if you think they may be malicious. I am only repeating the advice given by the Laycat admin here, and just to please him, since he thinks we have such a controlling effect on our readers, I must molly-coddle you all by saying: “We encourage you to make up your own mind; this post is purely for informational purposes, and we are not the definitive voice on the internet.” Laycat, do you feel reassured that we still don’t like your bots but have told our readers to make up their own minds? Readers, do you feel reassured that you’re not being “ordered” to believe what we tell you?
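For anyone who does want to act on that advice, here is a minimal sketch of both methods. The user-agent string is an assumption on my part, since these bots reportedly crawl with an anonymous agent, and the IP range is a documentation placeholder: substitute whatever actually appears in your own logs.

In robots.txt (only honoured by well-behaved robots):

# Block the crawler by user agent; "Laycat" is an assumed name
User-agent: Laycat
Disallow: /

In .htaccess (enforced by Apache even if the robot ignores robots.txt):

# Deny the IP range seen in your logs (192.0.2.0/24 is a placeholder)
Order Allow,Deny
Allow from all
Deny from 192.0.2.0/24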

Laycat, Kyklo, Aceleo malicious? … I say HELL YES … well, the admin certainly is!

Paranoid? … YES :) lol, or maybe just bored. At the end of the day it is your site, and you should be able to control, to some extent, who comes through taking your information, be it on the Internet or not. I’m now off to put on my tin hat, install barbed-wire fencing around my house and instruct my datacenter to restrict all traffic to and from my server, just because I feel like it!

For reference, here is the text of the robot notice from laycat.com:

Our crawler has visited your web site? Do you have any questions?

1) Why is your robot visiting my web site?

The Laycat crawler is a web documents indexing robot. Its job is to retrieve millions of pages from the world wide web in order to feed a search engine.

5) What is the search engine this web crawler is working for?

The search engine this crawler is working for is currently in an early development stage, and will go public as soon as we achieve the beta stage.

6) Why is your crawler using an anonymous user agent?

Many documents found on the internet are generated dynamically, and may present different content to crawlers than they would to regular visitors by examining the user agent string. Examples of pages adding links to gambling or adult content web sites when a crawler is visiting are plentiful.

This practice is called cloaking, and the goal is to fool crawlers and search engines into indexing different content than a normal person would actually see. This is what we might call search engine spamming.

To avoid that kind of practice, the crawler uses an anonymous user agent, and it will remain that way until we have enough data to do it the best way. At that point we will of course consider using a dedicated user agent. Most antivirus software uses the same method as we do when scanning web pages.

There is no real need for a webmaster to detect a crawler using the user agent string, since this crawler respects the Robots Exclusion Standard, and webmasters can decide whether or not to allow it to visit using this standard.

Please also note that the crawler will never fetch more than one page every two seconds from the same IP address, thus never eating the server's resources.

Filed under: Robots + Htaccess ... Comments (0)


 








 

 

I think there is nothing worse than owning a few domains and having nothing on them, so we are starting to go through both our new and old domains. First up is www.websiteNZ.co.nz.

It is a fantastically easy name to remember, and certainly no tongue-twister to tell anybody over the phone or to recall from a passing advertisement. It will be very apparent that a lot of our time has been invested in the Symsysit.com website, but once we end up with a design that works and functions exactly how we would like, it leaves less room for play.

Pure CSS, No Script, Tableless Horizontal Website

The new website, www.websiteNZ.co.nz, has been a refreshing technical challenge over the last few weeks. The goal set was a crazy idea: make a site that is horizontal, with no plugins, no script and NO TABLES allowed, using valid XHTML Strict and at most CSS 2.1. It also had to look right on my 22″ widescreen running 1680×1050 and equally right on a square 17″ 1024×768 screen, whether in IE6 or Google Chrome.

Argh, I hear you say… and yes, a BIG ARGH it was.

Some of the problems to overcome when creating a navigable horizontal website

When placing navigation to skip back and forth through a single horizontal page using anchors, it becomes a virtually impossible nightmare.

One of the main issues is Internet Explorer. ALL other browsers known to man will skip to an anchor aligned to the right of the page, showing the page that anchor belongs to.

Microsoft IE-based browsers do not know their left from their right

So basically, any web designer trying to do a site like this for IE alone would have to revert to a mentality lower than that of a kindergarten student. It would be like being a passenger in a car driven by an American tackling their first roundabout in NZ or the UK… understandably very confusing and very messy unless you are in the know.

See the diagram below for the result. The page will never quite arrive on screen in an IE-based browser unless you:

A) give the anchor its own dedicated cell to bounce left and right in, and

B) write double the code, alternating left and right anchors for everyone else with common sense versus IE-based browsers, as sketched below.
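To make A) and B) concrete, here is a minimal sketch of the double-anchor idea. None of it is the actual websiteNZ.co.nz markup: the ids, the class name and the conditional-comment trick are my own illustration. Each horizontal page gets an anchor at each edge, and the navigation sends IE to the opposite edge from everybody else.

<!-- One horizontal "page", floated into the long strip -->
<div class="page">
  <a id="page2-left"></a><!-- edge anchor for well-behaved browsers -->
  <h2>Page 2</h2>
  <a id="page2-right"></a><!-- spare edge anchor for IE, which scrolls the other way -->
</div>

<!-- Navigation: double the code, alternating targets for IE versus everyone else -->
<!--[if IE]>
<a href="#page2-right">Page 2</a>
<![endif]-->
<!--[if !IE]><!-->
<a href="#page2-left">Page 2</a>
<!--<![endif]-->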

Thanks a bunch, Microsoft! But then I suppose it is fitting to have so much unnecessary bloated code to satisfy the conditions. The rest of the bloat is there to cover up their opposite understanding of margins and paddings, and the extra pixels magically invented by, guess what… IE-based browsers.

[Diagram: Microsoft Kindergarten Level knowledge of Direction]

The best web design practice is to design and test for Chrome and Firefox, then add adjustments for IE7+, then add further adjustments for IE6 and other inferior browsers. This way, you know your CSS is correct even when IE decides to do the exact opposite of what it should be doing.
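In practice that layering tends to mean conditional comments in the document head: one base stylesheet for the compliant browsers, then override sheets that only IE reads. The filenames here are placeholders of my own, not from any particular site:

<link rel="stylesheet" type="text/css" href="base.css" />
<!--[if gte IE 7]><link rel="stylesheet" type="text/css" href="ie7-fixes.css" /><![endif]-->
<!--[if lte IE 6]><link rel="stylesheet" type="text/css" href="ie6-fixes.css" /><![endif]-->

Because the IE sheets are linked last, their rules win inside IE, while every other browser ignores the conditional comments entirely.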


Filed under: CSS,XHTML ... Comments (1)


  

 





Author:  Gremlette
October 25, 2008



 

 

How to hide entire blocks of code from GOOD BEHAVING browsers.

Now that most web coders / designers are putting real effort into writing decent, compliant code for their sites, there are a variety of ‘fixes’ that have to be employed to cure problems in old, badly behaved browsers such as IE6 and below.

MANY ‘get arounds’ involve miles of JavaScript code, which is completely useless, as well as resource-hogging, when many users block JavaScript entirely for security purposes. The reason people hold onto ancient technology such as IE6 is the completely misled belief that it is more stable and more secure than the upgrade. The MAIN problem is IE6, and while I agree that IE7 is not ‘all that’, it is now usable and closer to compliance and good security than IE6 is!
At the end of the day, IE6-insistent users are STUBBORN, so although it pains any decent coder to work with it, and sometimes doubles the billing time on a client’s web development, we have to try to work with it.

SAVE HOURS OF CSS HELL. If your visitors insist on ancient technology and block JavaScript, then they have to EXPECT not to be privy to some modern, compliant effects and functions. Example: users of add-ons such as NoScript have no idea what ‘Digg’ and ‘Delicious’ are, because they never see them, and cannot even add RSS feeds to their Facebook account… similarly, whatever:hover and other JavaScript menu hacks designed purely for IE6 and older browsers become completely pointless.

So, you have a nice purely-CSS-styled functional section, such as an active menu, that:

a) requires ZERO JavaScript whatsoever
b) is SUPER fast
c) is super clean and clear
d) is loved by web crawler robots etc.

BUT it won’t behave for stubborn stick-in-the-muds using IE6, the like, and below. The benefits of non-dependent, compliant code are far too good not to use. YET you also have code that old browsers CAN work with, which performs the same functions and is still not JavaScript-dependent (though it may not be as nice)… yet you don’t want BOTH versions to show up on the same page.

Keep two versions and STAY JavaScript-free

1)  Hide the GOOD STUFF from old browsers

Your two lumps of code, ‘GOODstuff’ and ‘ALTstuff’, will be defined in <div> tags, e.g.

<div id="GOODstuff"> all the HTML in here </div>
<div id="ALTstuff"> repeat of the HTML in here </div>

In your relevant style sheet, hide GOODstuff from old browsers:

/* IE6 and below do not understand the html>body child selector, so they only apply the first rule */
#GOODstuff { display: none; }
html>body #GOODstuff { display: block; }

Anything within the GOODstuff div tags, including the relevant styling, will be hidden from old browsers but will show in well-behaved browsers.

2) Hide the ALT STUFF from good browsers

/* Good browsers apply the html>body rule and hide the fallback */
#ALTstuff { display: block; }
html>body #ALTstuff { display: none; }

Basically, good browsers will follow the html>body rule and NOT display the contents of the ALT code div.
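Put together, a complete page using the trick looks something like this. The menu placeholders are mine; substitute your real GOODstuff and ALTstuff markup:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>GOODstuff / ALTstuff demo</title>
<style type="text/css">
/* IE6 and below ignore the html>body rules, so they show ALTstuff; everyone else shows GOODstuff */
#GOODstuff { display: none; }
html>body #GOODstuff { display: block; }
#ALTstuff { display: block; }
html>body #ALTstuff { display: none; }
</style>
</head>
<body>
<div id="GOODstuff">
  <p>Pure CSS menu for modern browsers goes here.</p>
</div>
<div id="ALTstuff">
  <p>Simpler fallback for IE6 and below goes here.</p>
</div>
</body>
</html>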


Filed under: CSS,Code,XHTML ... Comments (0)


 




