Monday, July 10, 2017

WordPress Intro Notes


Tips
Change Permalinks in Settings to use a non-date format unless the site is just a blog. This can be changed per page.


Design:
Custom Pages
Page Templates and Template Partial (part of a page)
Plug-in Page
Error Page


WordPress Theme Structure
style.css should be in the theme's root directory, alongside the template (.php) files
functions.php
Custom Files

These are default templates that are used when a more specific template can't be found

Page Templates
index.php is the only required template file... most generic...should be flexible
page.php is used to display pages
home.php is used for the blog posts index (the front page when it shows latest posts)
front-page.php is used for a static front page

Posts Templates
single.php is for individual post
archive.php is for post archives
category.php is for posts of a specific category
tag.php is for posts of a specific tag

Template Partials
A section or part of a web page encapsulated in a template file that can be pulled into any page:
comments.php
sidebar.php
footer.php
header.php

These are specialized templates chosen by the hierarchy rather than partials:
search.php
attachment.php
date.php
image.php
404.php
author.php
rtl.php
taxonomy.php

Can combine them, e.g.
archive-cat.php
page-contact.php
single-lady.php

Template Hierarchy
Determines how WordPress chooses a template file to display a page.
There is a flowchart at wordpress.org (see the Template Hierarchy page in the developer docs)



Development
wp-config.php
define('WP_DEBUG', true); // for development only
everything outside of wp-content folder is overwritten on update
themes are at wp-content/themes/mythemehere
create a directory here as shown above.
create the two required files for a theme.
style.css
index.php
In order for WordPress to recognize our theme we have to add the following to the style.css file.
The basic is:
/*
Theme Name: Some user friendly name for my theme here
Author: me
Description: Custom theme created for my company
Text Domain: mytheme
*/

At this point it will show under Themes in WordPress.

Can also copy it from an existing theme such as Twenty Sixteen. It is just a comment at the top of the style.css file
that says the name of the theme, etc.

Notice there is no image showing a preview of our theme. Can add a screenshot.png in the mythemehere directory. The recommended size is 1200 x 900.

To move over my css, copy my css into the style.css file. If using something like Bootstrap that has its own css files, use wp_register_style and wp_enqueue_style with add_action. Or more simply: <link rel="stylesheet" type="text/css" href="<?php echo get_template_directory_uri(); ?>/style1.css" />
This should also work, and is built for the purpose: <link href="<?php echo get_stylesheet_directory_uri() . '/style2.css'; ?>" rel="stylesheet" />

A template should not include the stuff in the header or footer. That would be in the header and footer templates.

A simple example for index.php would be

<?php get_header(); ?>
<div>some html</div>
<?php get_footer(); ?>

In the header.php include everything starting with <!DOCTYPE html><html>. Add <?php wp_head(); ?> just before the closing </head> tag so WordPress and plugins can inject their own markup. Delete the title tag so WordPress can generate it.

In the footer.php include everything from the footer markup through </body></html>. Add <?php wp_footer(); ?> just before the closing </body> tag so WordPress and plugins can inject their own markup.

When referencing an image from a template file such as front-page.php, use:

<img src="<?php bloginfo('template_url') ?>/images/myimage.jpg">

If referenced from style.css it would just be images/myimage.jpg, assuming images/myimage.jpg and style.css are in the same directory within the theme directory.

If we want to create a template for a specific page (not the front page), name the file following the pattern page-<slug>.php, for example page-contact.php. WordPress will apply it to that page automatically via the template hierarchy. In this scenario all the page-specific html and content can be in this file since it will only ever be used by this page. Effectively, the header and footer need to be removed and replaced with the php calls that pull in the header.php and footer.php files we defined. Also any paths to images, etc. will need to be prefixed with <?php bloginfo('template_url') ?>. The downside of putting the content in it is that it is not editable from WordPress; the theme files themselves would need to be updated. BTW, to find the slug, edit the page and go to Screen Options and check Slug. The slug will now show on the page.

We can also create a template selectable for any page (global). In this case name the file something like page_full-width.php, where full-width is not a slug for any page, and add a Template Name header comment so it shows under Page Attributes when editing a page.
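A minimal sketch of such a template file (the Template Name and markup are placeholders):

```php
<?php
/*
Template Name: Full Width
*/
get_header(); ?>

<div class="full-width-content">
    <?php while ( have_posts() ) : the_post(); ?>
        <?php the_content(); ?>
    <?php endwhile; ?>
</div>

<?php get_footer(); ?>
```

Because this template loops over the page content with the_content(), the page stays editable from WordPress, unlike a template with hard-coded html.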

To add menus to the header we can use WordPress's menu system, which can be edited using WordPress's tooling. Just add <?php wp_nav_menu( array( 'menu_class' => 'my-nav-class', 'theme_location' => 'primary' ) ); ?> where the menu is to be used.

To have the navigation also show in the footer.php add
<?php wp_nav_menu( array( 'menu_class' => 'my-footer-nav-class', 'theme_location' => 'footer' ) ); ?>

Notice Menus and Widgets aren't showing under the Appearance menu in WordPress. We need to add a functions.php file to our theme.

NOTE: there are no namespaces so function names can conflict with other vendor's functions. Best practice is to include the name of theme in function name.

Add functions.php to theme.

A basic file would look like:

<?php

if ( ! function_exists( 'mytheme_setup' ) ) :

function mytheme_setup() {
    add_theme_support( 'title-tag' );
}
endif;
add_action( 'after_setup_theme', 'mytheme_setup' ); // pass the function name, no parentheses

/* Register Menus */
function register_mytheme_menus() {
    register_nav_menus(
        array(
            'primary' => __( 'Primary Menu' ),
            'footer'  => __( 'Footer Menu' )
        )
    );
}
add_action( 'init', 'register_mytheme_menus' );

/* Add Stylesheets, fonts, etc */
function mytheme_scripts() {
    wp_enqueue_style( 'mytheme_styles', get_stylesheet_uri() );
    wp_enqueue_style( 'mytheme_google_fonts', 'https://fonts.googleapis.com/css...' );
}
add_action( 'wp_enqueue_scripts', 'mytheme_scripts' );

Now we can remove those stylesheets and fonts from the header.php.

Now Menus is showing under Appearance in WordPress. Now we can use the Menu editor in WordPress to create our menus. Notice WordPress knows about Primary Menu and Footer Menu as theme locations.

Widget Areas allow the content to be changed in WordPress by changing a widget. In functions.php we need to register it.

/* Add Widget Areas */
function mytheme_widget_init() {
    register_sidebar( array(
        'name'          => __( 'Main Sidebar', 'mytheme' ),
        'id'            => 'main-sidebar',
        'description'   => __( 'Some descr', 'mytheme' ),
        'before_widget' => '<section id="%1$s" class="%2$s">',
        'after_widget'  => '</section>',
        'before_title'  => '<h2 class="widget-title">',
        'after_title'   => '</h2>'
    ) );
}
add_action( 'widgets_init', 'mytheme_widget_init' );


Add a file called sidebar.php

In the file add something like

<?php if ( is_active_sidebar( 'main-sidebar' ) ) : ?>
    <aside class="sidebar widget-area">
        <?php dynamic_sidebar( 'main-sidebar' ); ?>
    </aside>
<?php endif; ?>

Now Widgets will show up under Appearance in WordPress. Any of WordPress's widgets can be added to the area. This includes the Text/HTML editor, which allows for custom content each time we use the page template.

Can use a Custom Post Type to allow users to edit content in WordPress using a form, like a data-driven template. These custom types show up in the admin menu and are edited like database records.
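A minimal sketch of registering one in functions.php (the 'book' type and its labels are made-up examples):

```php
/* Register a 'book' custom post type; it shows up in the admin menu */
function mytheme_register_book_type() {
    register_post_type( 'book', array(
        'labels' => array(
            'name'          => __( 'Books' ),
            'singular_name' => __( 'Book' )
        ),
        'public'      => true,
        'has_archive' => true,
        'supports'    => array( 'title', 'editor', 'thumbnail' )
    ) );
}
add_action( 'init', 'mytheme_register_book_type' );
```

Entries of this type can then get their own templates via the hierarchy, e.g. single-book.php and archive-book.php.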


Security Concerns:
Brute force login
Don't use admin as username
Use secure password
Enable two-factor authentication
.htaccess to block certain ip addresses

Comment Spam
Keep an eye on your comments
Tweak Discussion settings to limit links and blacklist keywords or ip addresses
Require users to register in order to comment



Plug-ins
Security Plugins
All in One WordPress Security and Firewall Plugin
iThemes Security
Sucuri Security

Top Backup Plugins
BackUpWordPress
BackupBuddy

Top SEO Plugins
WordPress SEO by Yoast

Other
Broken Link Checker
Google Analytics
W3 Total Cache
Sync - for managing multiple WordPress sites

Saturday, May 20, 2017

Securing your ASP.NET MVC website Checklist

First, let me start by saying this is not a comprehensive list, but it is a good start.

Add headers for all requests

Add this to your web.config
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <clear />
      <remove name="X-AspNet-Version" />
      <remove name="X-AspNetMvc-Version" />
      <remove name="X-Powered-By" />
      <remove name="Server" />
      <add name="X-XSS-Protection" value="1; mode=block" />
      <add name="X-Content-Type-Options" value="nosniff" />
      <add name="Strict-Transport-Security" value="max-age=31536000" />
      <add name="X-Frame-Options" value="DENY" />
      <add name="Referrer-Policy" value="no-referrer" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

This does a good job of explaining what some of the header options are

Require Strong Passwords

Go to your AccountController and find the code that creates the PasswordValidator and change it to something like this. Length is the most important thing to consider from a cryptographic complexity standpoint.

NOTE: 12 is a reasonable minimum, but 16 is better to make brute forcing sufficiently time consuming.

manager.PasswordValidator = new PasswordValidator
{
    RequiredLength = 12,
    RequireNonLetterOrDigit = true,
    RequireDigit = true,
    RequireLowercase = true,
    RequireUppercase = true
};

Remove ASP.NET Technology Headers


In Global.asax add the following to the Application_Start() event.

MvcHandler.DisableMvcResponseHeader = true;

You will also need to add the following to the web.config

<system.web>
<httpRuntime targetFramework="4.5.2" enableVersionHeader="false" />
</system.web>


Remove Server Info from headers

Add the following to Global.asax.cs

protected void Application_PreSendRequestHeaders()
{
    if (HttpContext.Current != null)
    {
        HttpContext.Current.Response.Headers.Remove("Server");
    }
}


Also read through security issues that require reviewing your code and maybe some knowledge of how your application is written.

Restrict the origin of anything loaded

To be extra safe, look at creating a white list of which stylesheets, scripts, etc. can be loaded, using a Content-Security-Policy header. This will take some digging on your site, but is probably worth the effort.


There is a NuGet package that does some of this. It looks to be a better choice since it works per controller, etc., and its documentation explains how to use it.
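The white-list idea can also be hand-rolled with a Content-Security-Policy header, using the same customHeaders mechanism as earlier. A sketch (the source lists are placeholders; each site needs its own):

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- White list: only load scripts/styles/etc from our own origin -->
      <add name="Content-Security-Policy"
           value="default-src 'self'; script-src 'self'; style-src 'self'" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```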

Friday, May 19, 2017

Code Contracts

Ever want a common way to do a null parameter check, or to check that an integer is positive, etc.? If so, you may find MS Code Contracts useful. The downside is that all your files then take a dependency on this API. The upside is that it lives in the System.Diagnostics.Contracts namespace, which is part of the mscorlib.dll assembly, so it should always be available.

Code Contracts provide a way to specify preconditions, postconditions, and object invariants in your code. Preconditions are requirements that must be met when entering a method or property. Postconditions describe expectations at the time the method or property code exits. Object invariants describe the expected state for a class that is in a good state.

The key benefits of code contracts include the following:
  • Improved testing: Code contracts provide static contract verification, runtime checking, and documentation generation.
  • Automatic testing tools: You can use code contracts to generate more meaningful unit tests by filtering out meaningless test arguments that do not satisfy preconditions.
  • Static verification: The static checker can decide whether there are any contract violations without running the program. It checks for implicit contracts, such as null dereferences and array bounds, and explicit contracts.
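A minimal sketch showing all three contract kinds (the Account class and its rules are made-up examples):

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    private int _balance;

    public void Deposit(int amount)
    {
        // Precondition: checked when the method is entered
        Contract.Requires(amount > 0);

        // Postcondition: checked when the method exits
        Contract.Ensures(_balance == Contract.OldValue(_balance) + amount);

        _balance += amount;
    }

    // Object invariant: verified after each public method finishes
    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(_balance >= 0);
    }
}
```

Note that runtime checking and the static checker require the Code Contracts tooling to be enabled for the project; without it the calls compile but are not enforced.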

Thursday, May 18, 2017

OWASP Top 10 Security for ASP.NET tips you may not know about

Below are my notes on what I thought was important out of the OWASP Top 10 Web Application Security Risks for ASP.NET

SQL Injection

Havij for testing for SQL Injection on a web url

Encoding output

Use the appropriate encoding (escaping character or character sequences) for the context you are using the input.

Use AntiXssEncoder.HtmlEncode() to HTML encode input when using Web Forms (available in NuGet or .NET 4.5). For example, use this before rendering user input to the screen. The <%: %> syntax will automatically encode it.

ASP.NET MVC Razor automatically encodes for HTML unless you tell it not to. For example, on Model/ViewModel add the [AllowHtml] attribute to the property you want to allow.

Use Microsoft.Security.Application.Encoder.JavaScriptEncode() to encode input for JavaScript. For example taking input and assigning to a JavaScript variable.


You can turn off request validation at the site level (web.config) or the page level. See here for more details.

Hiding Payload

Url encoding can be used by hackers to get around XSS detectors and make the payload unclear to the average user. Another approach is to use a url shortener like tinyurl.com.

Session Persistence

Don't use the url for any sensitive information since urls end up in web logs, browser history, etc. A session id is definitely sensitive information that can allow someone else to be you while that session is still active. Cookies are a much safer place to pass and persist session ids. The only downside is cookies need to be enabled, but most people have them enabled. Cookies are the default behaviour for ASP.NET. Be sure to never send a cookie with a secret in it over an insecure connection.

Session Timeouts

A sliding forms timeout can be extended forever by hitting a url, giving a large window where someone can use a hijacked session. This is great for valid users, and hackers love it too. So, turn off sliding timeout if you can to increase security.

The alternative is to set a fixed session timeout, but the problem is users will lose their session at the end of the timeout no matter what they are doing. There is no perfect solution for all cases. Set the timeout according to your needs to strike a balance between security and convenience for users. Do change the default values to meet your specific needs.

Indirect Reference Map

Indirect references can be used to conceal internal keys, but they are NEVER a substitute for access controls. Each internal id is replaced with a temporary indirect reference (stored on the server, in the session for example, and never exposed to the browser / user). This temporary indirect reference is cryptographically random and has no pattern to guess. Once the session ends, this mapping should expire. The map should be user specific so that it can't be used by any other user. This greatly reduces the chances of an attack by limiting who can use the reference, limiting how long it is valid, and making it not guessable from a pattern.

Thoughts on GUIDs. A GUID is not a map. It is unique and does not have a pattern. They should be viewed as obfuscation of the key. They are not user specific, and proper Access Control is a MUST if GUIDs are used. They do have the advantage that they cannot be enumerated easily and are close to being globally unique. A better choice would be to use System.Security.Cryptography.RNGCryptoServiceProvider.GetBytes() then HttpServerUtility.UrlTokenEncode() and a map instead of a GUID.

Access Control

Just because the url is not visible in the browser's url bar don't assume that a hacker can't look at the source of the page for urls to hack. The best defense is to add code on the controller action that checks the direct and indirect referenced keys belong to the user that is sending them. For example, if the user was passing the id of the record to be displayed, the controller action that displays the record should check that that user has access to that record.

Cross Site Request Forgery

Put simply, it is a way to trick the browser into making a valid request from an evil site by exploiting the fact that cookies are sent for all requests to that domain. The evil site makes a request that is identical in form to the request the original site would have made, but with a malicious payload. It could, for example, take advantage of an authentication cookie that was not secured properly.

ASP.NET MVC has a mechanism that adds randomness via a CSRF token. This token is known to the legitimate page as a hidden form field and to the browser via a cookie. When the request is sent to the server, both the hidden value and the cookie are sent, and the server compares them; they must match. The trick is the hacker won't know what value to put in the form, so the attack will fail. This is actually very effective protection.

To implement this, you need to do two things. Add the attribute called ValidateAntiForgeryToken to the controller action. Also, add @Html.AntiForgeryToken() to your view just inside the BeginForm() block.
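A minimal sketch of the controller side (the action and model names are made up):

```csharp
// Rejects the request unless the hidden form field matches the cookie token
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult UpdateProfile(ProfileViewModel model)
{
    // ... apply the changes ...
    return RedirectToAction("Index");
}
```

On the view side, @Html.AntiForgeryToken() inside the form renders the hidden field that pairs with the cookie.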

Once you implement this, you will see a __RequestVerificationToken in the request body when the request is made. There is also a matching cookie called __RequestVerificationToken. When the CSRF attack is executed, the authentication cookie and the __RequestVerificationToken cookie are sent with the attack request, but ASP.NET MVC returns a 500 error because there is no (or an invalid) form field called __RequestVerificationToken in the request. The hacker doesn't have direct access to the cookies, but they are sent automatically by the browser. Thus the hacker has no way of knowing the CSRF token value. The attack has been stopped.

NOTE: There is also an Authorize attribute that is often the first line of defense, but it is not as useful against CSRF since authenticated users are often the ones tricked into making the attacks.

NOTE: Checking the Referrer site can be helpful, but doesn't protect from CSRF.

Trace.axd

It contains cookies such as the auth token and CSRF token, and potentially the connection string, versions of software, etc. Luckily the url is disabled by default. It can be enabled in MVC and Web Forms. Use a config transformation to remove the trace node in the Release configuration in case it is ever enabled in the web.config file.

Encrypt Connection String in web.config

It is good to have multiple layers of security. In case a hacker is able to access your web.config you want to limit what he can see. The connection string is quite important and should be encrypted.

To do so, run a command prompt as Administrator and execute:
aspnet_regiis -site "name of site in IIS" -app "app name or /" -pe "connectionStrings"

This uses the encryption key on the server and is server specific. So, it must be executed on that server.

The remaining risk is that someone can go to the server and run the command to decrypt the string, so limit who can access the server.

Enable Retail Mode

Retail Mode prevents leaking of exception data on YSOD even when configuration is wrong for ALL applications on server. To do so, add the following deployment tag to the machine.config. This forces the same behavior as enabling customErrors. It is a safety net for all applications. 

<system.web>
  <deployment retail="true" />
</system.web>

Password Rules

Make it harder for hackers by requiring longer passwords, not allowing words in the dictionary or variations of them, and require special characters, mixed case, numbers.

Storage of Passwords

If a hacker can get the username, the salt, the hashed value, and the operation used by the system to do the hashing, then they can brute force attack and recover over 65% of the passwords (on average). To do this they simply call the hashing operation with a password from their list of likely passwords, using the salt they gained access to. Then they compare the hashes. If they are the same, they know the password that was originally used by the user. They now have everything needed to log in as that user.

The Membership Provider's default implementation is near useless because the attacker has both the hash and the salt. Also, SHA1 is too fast an algorithm, which means it can be cracked faster.

Be careful: if someone has used your password on the internet and that hash has been compromised, then a simple google search for that hash can show what the original password was. If a long salt was used then rainbow tables become useless. Salts make it harder to crack a hash, but with modern GPUs computing 7.5 billion hashes per second (in 2012), it is literally just a matter of time.

Check out Kerckhoffs' principle if you are wondering if your security is good. Basically it says, if you are not willing to give the design of how your security works and are thus relying on this lack of knowledge as part of the security then your security is not good enough. You must assume the attacker will learn how the security works soon enough.

Cryptography-Hashing

Faster algorithms are, ironically, not good. We actually want the hashing algorithm to take as long as we can stand. This is balanced against the processing required to log in or register, for example, and how long it would take to crack. SHA1 is probably not sufficient anymore. One way around this is to apply the algorithm, say, 1000 times, but again that probably isn't enough. BCrypt allows us to directly control how many times the algorithm is applied (the work factor).
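A sketch using the BCrypt.Net NuGet package (an assumption; the notes don't name a specific library):

```csharp
// Work factor 12 => 2^12 internal iterations; each +1 roughly
// doubles the time needed per hash, for defender and attacker alike.
string hash = BCrypt.Net.BCrypt.HashPassword("p@ssw0rd!", workFactor: 12);

// Verify re-hashes the candidate using the salt embedded in the stored hash
bool matches = BCrypt.Net.BCrypt.Verify("p@ssw0rd!", hash);
```

Pick the largest work factor whose login delay your users can tolerate, and raise it over time as hardware gets faster.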

No algorithm is fool proof especially when lists of password and hashes and salts are cataloged. It is all about making it too difficult / time consuming / financially unjustified, etc to do it. Even the most complex algorithms can be circumvented by cataloging the input and outputs.

Cryptography - Encryption

Encryption is less desirable than hashing because if the key is obtained then all the encrypted values can be decrypted and the original values are available. With Hashing it is a one way process and the original value cannot be obtained using the algorithm (must hack as noted earlier). 

The trick with encryption is that you must manage the safety of the key. DPAPI uses the machine key on the server the code is running on to do the encryption and decryption (symmetric algorithm) so the application does not have to.

Password Hacking tools

hashcat - advanced password recovery - brute force for comparing hashes and common hashing algorithms
RainbowCrack - Use rainbow tables to get a pregenerated list of hashes meeting different rules for password.

Restricting Urls in MVC

Don't use web.config location tag permission restrictions because they are based on the url, not the page. In MVC more than one route can point to the same page, but only the urls used to access them are protected by the location tag in web.config. This means if you have two routes you have to have two location tags in the web.config. This could lead to very buggy and inconsistent access control. It worked well in Web Forms because the page is the url.

For MVC you want to protect the controller and the actions of the controller. The way to do that is the Authorize attribute. Just using it requires authenticated users. If you want to restrict to roles, you can pass a comma separated list as a parameter to the Authorize attribute.
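For example (the controller and role names are hypothetical):

```csharp
// All actions on this controller require an authenticated user
[Authorize]
public class ReportsController : Controller
{
    // This action is further restricted to specific roles
    [Authorize(Roles = "Admin,Auditor")]
    public ActionResult Sensitive()
    {
        return View();
    }
}
```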

Don't forget to protect resources like JavaScript, reports, ajax calls, APIs, pdfs, etc. Really anything that is not in the browser's url bar but does show up in the network traffic.

Side note: Never send sensitive data in the url because they can be in web server logs, error logs, history, etc.

Just because a url can't be guessed doesn't make it secure. Urls get leaked through many different routes such that they don't have to be guessed.

Insufficient Transport Layer Protection

If TLS (Transport Layer Security) is not properly done, it opens up the opportunity for a MiTM (Man in the Middle) attack. This can be done by physically tapping an ethernet cable, intercepting traffic at the ISP level, monitoring unprotected traffic at a wifi hotspot, or creating a rogue wireless access point.

With MiTM the attacker can see all http (not https though this is debatable) traffic from the victim. This would include any cookies that are not secured with HTTPS. All cookies that have sensitive information (arguably all do) should only be allowed to be sent via HTTPS (and the cookie not sent if http is used). 

To do this in ASP.NET Web Forms, you set the cookie's Secure flag by adding <httpCookies requireSSL="true" /> to the web.config. It can also be overridden per request using Response.Cookies.Add(). When you do this, if you hit a page over HTTP the cookie will not be sent. If this is the authentication cookie, the site will ask you to log in again (and you won't be able to) when you access an http link from an https page, because the auth cookie is not sent for the http page.

You can also require HTTPS per controller action by adding the RequireHttps attribute. This makes sure the url cannot be accessed over HTTP. It is best to link users directly to the HTTPS version instead of relying on a redirect from http to https.

As a best practice, and to avoid a browser warning, the entire site should use https instead of a mix of http and https. The warning can show up when the site requires https but a script tag references http:// as the source. A clean way around this is to use a protocol relative url: set src="//hostname/somelib.js". Basically, just remove the http: or https: from the url. It will then assume the protocol of the page (i.e. https). Be sure the url you are accessing supports both http and https.

As a side note, when using a load balancer the request comes to the load balancer as HTTPS, but by the time it gets to the web server itself it is no longer HTTPS (it is HTTP). In this case, the load balancer typically adds an X-Forwarded-Proto header instead, so the built in secure cookie handling won't apply and custom implementation will be required.

HSTS is HTTP Strict Transport Security and tells the browser to never make an http request to the site. The Strict-Transport-Security header does this. It has its limitations and browser support is patchy, but it is still a good line of defense. It does require the certificate to be trusted.

HSTS can be implemented in an ASP.NET MVC application by adding the following to your web.config

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Strict-Transport-Security" value="max-age=31536000" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

Do NOT show a login form over HTTP (use HTTPS), even if the post is to HTTPS. The reason is that a MiTM attack could inject code on the page to do something like get form values (username and password) and send them off to another location in parallel to the normal submission.

Do NOT load HTTPS login forms inside an iframe on an HTTP page, because the parent page is vulnerable and could be manipulated to load a different login form into the iframe. A better choice would be to show the actual login in a full screen window so users can see the url and the secure icon in the browser.

Do NOT put username and passwords in urls because they are often in web server logs in plain text, etc.

Unvalidated Redirects and Forwards = bad reputation

They are useful to attackers because unvalidated redirects abuse the trust the victim has in a legitimate site.

Imagine you have a site and you want to track when a user clicks on a link, so you have a redirect action on your controller that takes a target url that the user is to be redirected to. Another example is how ASP.NET takes the user to the login page when they try to access a page they don't have access to. In this case, the target url is an internal url, or at least is expected to be.

The problem comes when a hacker manages to change the target url (when it is not validated) in the redirect link. From the user's perspective, the site they are on is one they likely trust or that looks legitimate, but when the hacker gets to change the target url they can take the user wherever they want. The user is harmed in this scenario, and so is the site with this link.

Now imagine the user receives a spam email and the link is something like http://1.usa.gov/OYCBM7. It could arrive via email, social media, compromised legitimate sites, etc. It could come from a url shortener site to make it difficult to tell where it is going. It also has a .gov domain, so people will likely trust it. The long version of the url could be something like http://trustedsite.com/redirect?url=http://evilsite.com/malware. The query string could also be encoded to obfuscate it. This also gets by blacklist detectors.

If you don't validate redirects and forwards on your site, you are a potential target for hackers to use your site for their evil ways. To protect your site, you need to validate the querystring before you redirect to the target site. The best way to do this is to have a white list of acceptable urls. The white list could be a regex, string literal, int, list, etc. This would live in the controller action (or wherever the redirect is done).

There are scenarios where you can't use a white list. In this case, you can check the referrer (UrlReferrer) on the request to determine if the user came from our site or some other site. In particular, the UrlReferrer will be null when the request came from a non-browser source such as an email client, twitter client, pasting into the url bar in the browser, etc. We can also check if the request is from our site (Request.UrlReferrer.Host != Request.Url.Host).
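A sketch of a validated redirect action (the action name is made up; Url.IsLocalUrl is built into MVC):

```csharp
public ActionResult TrackClick(string url)
{
    // White list approach: only follow urls local to this site
    if (Url.IsLocalUrl(url))
    {
        return Redirect(url);
    }

    // Anything off-site is refused rather than forwarded
    return RedirectToAction("Index", "Home");
}
```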

This is not 100% risk free. For example, the Referrer header value can be faked / changed to be whatever and circumvent checks, though that is not what happens when a victim simply follows a link.

Security Related Sites

nakedsecurity
hak5
Troy Hunt

Friday, May 5, 2017

How to configure ASP.NET Custom Errors Correctly

If you are deploying your site you should make sure you have custom errors on so you don't leak information that a hacker could use to attack your site.

Enable Custom Errors

Setting customErrors to On will keep exception details from the user, but it shows the YSOD, which is a 500. Hackers look for pages with 500 error codes as potential targets.

<configuration>
      <system.web>
            <customErrors mode="On" />
      </system.web>
</configuration>


Add a user friendly error page 

The downside of this is that the redirect url pattern still indicates that there was an internal server error. Again, this highlights a potential target for hackers.

<configuration>
      <system.web>
            <customErrors mode="On" defaultRedirect="Error.aspx" />
      </system.web>
</configuration>


Get rid of the error page pattern in url 

The response now returns a 200, which looks like a successful page. There is no 302 redirect to detect the error either. The only way to tell there is an error is to read the message on the page; it can't be determined by a url pattern or status code.
<configuration>
      <system.web>
            <customErrors mode="On" defaultRedirect="Error.aspx" redirectMode="ResponseRewrite" />
      </system.web>
</configuration>

Static Content Generators

Why have a dynamic site if all you are displaying is static content? Static site generators allow you to edit your site in a similar way to a dynamic site, but without all the overhead when you host it. The advantage is low server requirements, and the site is very fast compared to a dynamic site. You can save money on hosting fees and your users will have a better experience.

As an added benefit, your site is more secure because there is less software required to run it, which means less surface area for attack. This is true for the code required by your website and for the third party software you need to run your web site, such as ASP.NET, etc.

Keep in mind there are definitely times when a dynamic site is necessary, but if your pages are the same for every user you may want to consider a static site generator. Here are some useful links.

Site Generators

Jekyll - one of the most popular static site generators.
Hugo - easy way to do blogs, but also docs, portfolios, etc.
JAMStack - JavaScript, APIs, Markup. A JavaScript based architecture. Worth looking at.


Visual Editors

NetlifyCMS - plugs into any static site generator; provides a graphical UI to edit content for your site generator.
SiteLeaf - content management in the cloud. Looks like WordPress. Uses GitHub.

Free Hosting

GitHub Pages
Setting up a custom domain for GitHub Pages
Video showing how to host a site on GitHub Pages and add a custom domain to it

Wednesday, April 5, 2017

SEO Basics (Technically)


Myths and Tips

  • Meta Keywords in header - they don't help and are dead.
  • Meta description in header is important though. Shows 155 characters in Google, 165 characters for Bing.
  • Make the title tags unique. Google allows 62 characters and Bing allows 57. Longer ones get truncated. Shorter titles have higher click through rates.
  • Adding keywords to title tags can be penalized (see Google Penguin). Don't use the same keywords in the title and keyword description, but you can use variations of your keyword. Write titles for people, not as a list of keywords.
  • <strong> is the same as <b> and does not give an advantage
  • Keywords in the h1 tag don't help. Don't worry about h1 tags for SEO
  • There is no perfect Keyword density
  • Article Spinning creates worthless content
  • Minimal content is ok
  • Linking to other sites is ok and even good.


Meta Data

  • robots - index,follow
  • facebook (open graph): og:*... use them
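As a sketch, the two kinds of meta data above look like this (the values are examples):

```html
<meta name="robots" content="index,follow">
<!-- Open Graph tags read by facebook and other sites -->
<meta property="og:title" content="My Page Title">
<meta property="og:description" content="A short description of the page.">
<meta property="og:image" content="https://example.com/preview.jpg">
```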

Canonical

  • Don't use syndicated content as it will only strengthen the provider of it, not your site.

Rich Snippets / Structured Data

  • Highlight the keywords in the description Google shows.
  • See Google's Data Highlighter in its webmaster tools.
  • See schema.org for non-Google data highlighting. See also Microdata.

Headlines

  • Make headlines interesting
  • Use short sentences with only one idea (12 - 14 words)
  • Keep paragraphs short; they encourage users to continue reading
  • An 8th grade reading level is best in many cases.

Internal Linking

  • Use them in natural ways and when they add value.
  • Follow how wikipedia does it.