Hey everyone! Today I’m going to talk about the life cycle of a piece of software. Almost all software goes through a similar development process: initial development, alpha build, beta build, release, then maintenance.
Initial development is all of the work that goes on before the software is even functional. After initial development, the first build that's usable is called the alpha build. It's known as a minimum viable product (MVP) – the most bare-bones usable version of the software, where many of the features won't be enabled and bugs are expected. Alpha builds are mostly for internal testing.
Once the alpha build has been tested and developed further, companies will transition it to a beta build. This is generally the first point at which a build is released for testing outside the development team – sometimes to testers hand-picked by the developers, and sometimes open to the public. Betas are for quality control: they're mostly complete versions, but bugs are still expected, and the purpose of releasing a build outside of the development team is to find those bugs and fix them before the full release.
Once the beta build is finished, the software is released in its full version, but most software these days will continue to be updated after its release as well. These updates generally fall into one of three categories: hotfixes, feature additions, or major releases. Hotfixes are quick to release and generally small, sometimes not even big enough to be mentioned. These generally fix bugs or small usability issues. Feature additions are larger updates to the current edition of the software – expansions to functionality or improvements to performance. A recent example of a new feature added to SiteManager is the set of filters added to the Commerce -> Jewelry page. Both hotfixes and feature additions have a relatively short development time, though larger feature additions can be longer-term projects.
Major releases are whole new versions of the software. They usually include a lot of new code and are generally incompatible with previous versions. For instance, SiteManager major releases are all numbered and completely distinct from each other – so far we've had six major releases, so we're on Version 6. Once a new version has been released, old versions will often continue to be supported, but only for a limited amount of time. Most of the time, a major release is built because there are new features or structural changes that the old version wouldn't be able to support, and that requires at least a partial rewrite of existing code.
One of my favorite parts of polishing off a website design is choosing the images. Images can 100% make or break your website. They reinforce the aesthetic and give your site so much interest! How do I source my imagery? I either use iStock images or brand/vendor images.
Let's start with some do's and don'ts for choosing iStock images:
DO use well-edited lifestyle imagery.
DON'T choose people looking cheesily right at the camera. A lot of these look like cheap stock imagery, and I've honestly recognized iStock people on other websites or even seen them on billboards in real life! "Oh my goodness! There's generic Gillian from iStock!"
DO use candid moments if you want to show lifestyle imagery of people.
DO choose a selection of diverse images.
DO choose highly styled images if you’re going for a high end look.
DO use a person making eye contact on rare, rare occasion. Usually professional brand images do this well.
__________________________________________
Now let's talk about how to group these images together so that your website is cohesive and has the same tone throughout.
DO stick with the same tone whether it’s cool or warm. Here’s an example of warm tones.
DON'T show the same body part over and over again. Too many hands!!!
DO mix it up and show an arm, then a face, then perhaps just jewelry on its own.
Lastly, I love using images from a vendor/brand, especially if they're all from the same styled shoot. The images will automatically go together perfectly. You could choose from any of these below for a curated look.
There are a couple of ways to direct & guide your user’s attention. We talked a while back about hierarchy with headers but here are some more ideas! Humans tend to look in the same direction as people they see in ads/images.
As you can see in the image above, more people are reading the text when the baby is gazing in that direction. Isn’t that cool?!
Another solution is to literally doodle an arrow to direct focus to a CTA/button or a form to fill out. Here is a heatmap below showing the results.
Hey everyone! It’s Friday, so you know that means another dev dive. As we head into the holiday season, I want to talk a little bit about one of the most important things to improve performance on the web – cache (pronounced “cash,” not “cash-ay”). Caches are actually used almost everywhere in your computer, down to your CPU. Don’t worry - I won’t get down to that level, but I do want to talk about how caching helps websites.
As amazing as the internet is, one of its main limitations is that, compared to your computer, it's slow. Even the slowest SSDs today can read data at around 500 MB/s (megabytes per second). The fastest internet connections in the US today max out at around a quarter of that, and the average internet connection is less than a tenth that fast. This means that to improve speed, your computer will avoid using the internet if it can. You may have heard of the "browser cache" – when you load a website, your browser will temporarily store some of the data (such as images and JavaScript files) so that when you load the page again, it can access those files from your computer's storage rather than having to use the internet. This makes loading the page much faster, but if for some reason those files change, your computer might still load the old version. If our Customer Success team has ever asked you to hard refresh your browser, what we're really asking is for the browser to ignore those cached files and re-download them from our servers to ensure that you have the latest version of everything.
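For the curious, here's a rough idea of how a server signals this to the browser. This is just an illustrative sketch with made-up values and a hypothetical file path, not our exact configuration – the Cache-Control header tells the browser how long it's allowed to reuse its local copy before asking our servers again.

```php
<?php
// Illustrative only: hypothetical path and header values, not Punchmark's actual settings.
$path = 'images/logo.png';
header('Content-Type: image/png');
header('Cache-Control: public, max-age=604800'); // browser may reuse its copy for up to 7 days
header('ETag: "' . md5_file($path) . '"');       // lets the browser cheaply check whether the file changed
readfile($path);                                 // send the image itself
```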
Browser caching is what’s known as “client-side” – i.e., the cached files are stored on the client (your computer). At Punchmark, we also implement “server-side” caching – cache stored (you guessed it) on the server. Many parts of our platform require getting a lot of pieces of data from different places and putting them all together – config values, item data, HTML templates, etc. This can be costly and take a lot of time. Once we have all of that data together, though, we can store it all in one place as a cache file so that the next time we need the information for that page, we can just grab the cache rather than have to go fetch the data all over again. This helps to take load off of the databases and web servers, which even further increases the speed benefits.
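To make that a bit more concrete, here's a minimal sketch of what a server-side cache can look like. This is simplified, illustrative code – the file location, lifetime, and JSON format are assumptions for the example, not how our platform actually stores its cache.

```php
<?php
// A minimal file-based cache sketch (illustrative, not our real implementation).
// $buildData is whatever expensive work gathers the page's data.
function get_cached_data(string $key, int $ttlSeconds, callable $buildData): array {
    $cacheFile = sys_get_temp_dir() . '/cache_' . md5($key) . '.json';

    // If a fresh cache file exists, skip the expensive work and read it back.
    if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttlSeconds) {
        return json_decode(file_get_contents($cacheFile), true);
    }

    // Otherwise gather everything the slow way (config values, item data,
    // templates...) and store the assembled result for next time.
    $data = $buildData();
    file_put_contents($cacheFile, json_encode($data));
    return $data;
}
```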
Let me know if you have any questions about caching or have any other topics you’d like to hear about for future dev dives!
Hey everyone! This is part 2 of the dev dives on images. Today, we’re going to focus on what Punchmark does to automatically make your images perform as well as possible. There are a few main things that I’d like to discuss: delivering images from a CDN, automatically optimizing images, and supporting WebP images platform-wide. I’ll also be giving a sneak peek into one of the big improvements we’ll be implementing in the future, so read to the end for that.
To ensure that images are loaded as consistently as possible, we serve them from a CDN, or Content Delivery Network. A CDN, in basic terms, stores content on multiple servers in different physical locations. This allows for two main benefits. Firstly, because the content is stored on multiple servers, it allows for a level of redundancy – if one server goes down, another server can still serve the content. Secondly, by serving content from a server that is physically closer to the user connecting, it decreases the latency between the server and that user. CDNs also store cached versions of the content, which further contributes to the speed benefits.
We talked a lot in the last dev dive about the ways to optimize an image, and we actually do some of that automatically with our new image optimizer tool in SiteManager. When you run the image optimizer, a few things happen. Firstly, any image that isn’t in the CDN will be added. Secondly, if the image is high resolution, the image optimizer will automatically remove some of the unnecessary resolution. It also converts images to WebP and compresses them slightly to minimize file size. If one of your pages is loading slowly, this is a great first tool to turn to. However, keep the principles from the last dev dive in mind – if you know that you can reduce an image even further than the image optimizer would, such as images that you know will never be served in a large format, don’t hesitate to do so.
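If you're curious what that kind of processing looks like under the hood, here's a simplified sketch using PHP's GD image library. It's illustrative only – the size limit and quality value are assumptions, and the actual image optimizer in SiteManager does more than this.

```php
<?php
// Illustrative sketch of resize + convert + compress (not the actual SiteManager tool).
function optimize_image(string $sourceJpeg, string $destWebp, int $maxWidth = 1600): void {
    $img = imagecreatefromjpeg($sourceJpeg);

    // Remove unnecessary resolution: never store more pixels than will be served.
    if (imagesx($img) > $maxWidth) {
        $img = imagescale($img, $maxWidth); // height scales to keep proportions
    }

    // Convert to WebP with light compression to minimize file size.
    imagewebp($img, $destWebp, 80);
    imagedestroy($img);
}
```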
Finally, we ensured that our platform supports WebP, and will automatically serve WebP images if the user’s browser supports them. I discussed the benefits of WebP in the last dev dive, but I’ll give a quick recap here. WebP has the best of both JPEG and PNG rolled into one. The file sizes are small, even smaller than JPEGs with the same quality, but WebP still supports transparency and looks good with graphic design and logos, like PNG. It is, without a doubt, the best image format for the web, but not all browsers support it. We’ve set our platform up so that we can serve all images as WebP if supported, but still fall back on the JPEGs and PNGs if not.
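One common way to pull off that kind of fallback is to check the Accept header the browser sends with each request, which advertises whether it understands WebP. The sketch below shows the general idea with made-up file paths – our platform's actual logic is more involved.

```php
<?php
// Simplified content negotiation sketch: serve WebP when the browser says it
// can handle it, otherwise fall back to JPEG. Paths are hypothetical.
$supportsWebp = isset($_SERVER['HTTP_ACCEPT'])
    && strpos($_SERVER['HTTP_ACCEPT'], 'image/webp') !== false;

$imagePath = $supportsWebp ? '/cdn/ring-123.webp' : '/cdn/ring-123.jpg';
echo '<img src="' . htmlspecialchars($imagePath) . '" alt="Ring">';
```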
Now for the exciting stuff – what's coming next? Our biggest goal coming up is to serve different sizes of the same image based on use case. For instance, it's a great idea to upload nice photos of the pieces on your site. On the jewelry details page, this means that the piece can be shown off and look great. But on the grid, where the piece is never going to be shown in high resolution, all that extra resolution does is increase load time. Right now, we're working on automatically resizing images to multiple resolutions, which allows us to serve smaller images for performance while still being able to show off the high resolution when necessary.
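One standard technique for this is the srcset attribute, which lists the same image at several widths and lets the browser download only the size it needs. Here's a small sketch of the idea – the file names, widths, and helper function are made up for illustration, not necessarily how we'll implement it on the platform.

```php
<?php
// Hypothetical helper that builds an <img> tag with a srcset of pre-generated sizes.
function responsive_image_tag(string $baseUrl, array $widths): string {
    $sources = [];
    foreach ($widths as $w) {
        $sources[] = "{$baseUrl}-{$w}w.webp {$w}w"; // e.g. /cdn/ring-123-600w.webp 600w
    }
    // The smallest size works as a safe default src.
    return '<img src="' . $baseUrl . '-' . $widths[0] . 'w.webp" srcset="'
         . implode(', ', $sources) . '" alt="">';
}

echo responsive_image_tag('/cdn/ring-123', [300, 600, 1200]);
```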
This was another long one, but again, images are one of the most important parts of your website, both in terms of performance and user experience. Getting images right is a huge piece of the puzzle, and we want to give you all the tools that we can. Let me know if you have any questions, and if you have suggestions for future posts, put them in the comments!
Hey everyone! For our dev dive today, I want to talk about images. Chances are that if you’ve ever looked into optimizing your website, images are the first issue mentioned. As a professional photographer myself (when I’m not developing websites, of course), making sure that my images look great is really important to me, but the unfortunate reality is that the nicer your image, the longer it’s going to take to load. So today, as the first part of my two-part series on images, I want to talk about what you can do to make your images as speedy as possible while still looking excellent. We’ll be talking about the three factors that determine the size of an image (resolution, file format, and compression level) and how they can be optimized.
First up: resolution. Most of you probably know this already, but this is the measurement of how many pixels make up your image. With a couple of exceptions, such as SVG, resolution works exactly the same way for every file format – a 600x600 JPEG has no difference in resolution from a 600x600 PNG, TIFF, etc. Smaller resolutions make for smaller file sizes, because each extra pixel is one more piece of data that needs to be stored. The trick here is to know the maximum resolution that each image will be served at, and to size your images accordingly. There's no point in uploading an image that's 3000x2000 if it's only ever going to show up as 150 pixels wide.
File format is a bit trickier, because there isn't really a one-size-fits-all answer here. All image formats have their own strengths, so I'll briefly touch on the biggest formats and their main pros and cons.
JPEG: JPEGs are great for photographs. The compression algorithm for JPEGs is specifically designed for real photos, meaning that JPEG file sizes tend to be smaller than other formats while still looking good – in most cases. Images with a lot of flat colors or simple shapes (think logos or other graphic design) tend to show compression artifacts if compressed too much as a JPEG. JPEG also doesn't support any kind of transparency, so it's definitely not recommended for logos.
PNG: PNGs are best for logos and graphic design. The compression isn’t as aggressive as JPEGs, so artifacts don’t show up nearly as easily, and PNGs support transparency! However, because the compression isn’t as aggressive, PNGs are generally larger than JPEGs for similar images, so make sure that you need the extra benefits of PNG if you’re going to use it.
WebP: The newest format here, WebP is the best of both worlds. WebP files are even smaller than JPEGs, but support transparency and still work well for logos. Unfortunately, a lot of browsers still don't support WebP. Punchmark is actually putting native WebP support into our platform, so if someone visits your site on a browser that supports WebP, we'll serve all of your images in that format. Otherwise, we can still pass through JPEGs and PNGs. Essentially, don't worry about uploading in this format – if it makes sense to use it, we'll do so automatically.
SVG: This one's a bit weird - SVG stands for Scalable Vector Graphics. The key word there is Vector; SVGs don't use pixels. The plus side is that SVGs naturally scale as large or small as they need to, and the file sizes are generally less than half that of a JPEG or PNG, but they really only work well for simple images. Support for SVGs is also somewhat unreliable, and you only get the benefits if your image was designed as an SVG in the first place, so unless you know what you're doing, it's best to avoid these. We do have a couple of clients who use SVGs for their logos, however, so at least know that it's an option.
Compression is the final piece of the puzzle, and the one that will generally take the most trial and error. If you edit your image in a fully featured editor such as Photoshop or GIMP, you’ll notice that when you save your image, there are options for the level of quality that you want. In general, you want to go as low as you can until you just start to notice the image quality degrade. Even if there are a few artifacts, almost nobody will notice when visiting your site. But also, keep in mind the use case for your image. If it’s the first slide on a full-page banner, you may want it to look just a bit nicer. If it’s a stock image on a landing page, you can probably get away with it not looking quite as good.
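If you'd rather experiment in code than with an editor's quality slider, here's a tiny sketch that exports the same photo at a few quality levels so you can compare the file sizes and eyeball the results. The file names and quality values are just examples.

```php
<?php
// Export one photo at several JPEG quality levels and print the resulting sizes.
$img = imagecreatefromjpeg('banner-original.jpg'); // hypothetical source file
foreach ([85, 70, 55] as $quality) {
    $out = "banner-q{$quality}.jpg";
    imagejpeg($img, $out, $quality);
    echo $out . ': ' . round(filesize($out) / 1024) . " KB\n";
}
imagedestroy($img);
```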
This has been a bit of a long one today, but images are very important to me and I hope that everyone learned something! Next week I’ll be going over how we handle your images on our end, and the optimizations that we have in place now as well as plans for the future.
Hey everyone! Slightly different kind of post today, but I thought it would be fun to go through and give you a brief introduction to the tech team at Punchmark and what we all do. Currently, we have six full-time developers, and each of us lends something unique.
I’ll start with myself – I came on to Punchmark’s team about 3.5 years ago just out of college, and I work on the back end of the platform. Along with Ross, I specialize in maintaining and upgrading our EDGE integration, as well as working to build out features and improvements for Version 6. My strengths lie in designing algorithms as well as processing, managing, and transforming data.
Our other back-end developer, Sean, just joined early this year. He came to us after helping build a startup called Passport, and has a ton of experience designing infrastructure that’s both fast and scalable. In addition to the features he’s added to Version 6, a lot of the performance optimizations that we’ve made recently have been worked on by Sean.
Our CTO needs no introduction. Bryan came on to Punchmark in the middle of Version 5’s life cycle to help grow the platform and make it better than ever. Bryan cut his teeth as lead developer of a tech startup in NYC, and brings that experience here. He designed the fundamentals of the Version 6 platform and is a pro at architecting and building challenging projects.
On the front end, we have Kyle. Before coming to Punchmark early this year, he worked as a freelance designer and web developer, and has an incredibly sharp eye for design. Kyle works to code out a lot of client websites, and lends his design experience to polish and improve the UI of our platform. Kyle has helped to build some of our coolest new features, and you’ll see even more in the near future.
Co-founder, CPO, and front-end expert – Dan is absolutely integral to the Punchmark team. Until Kyle came on, Dan was not only the sole front-end developer, but also led the design process for the entire platform. Almost everything that you see and interact with was, in some way, built by him. Dan also closely follows website trends to make sure that Punchmark’s platform design is always up to date.
Finally, our co-founder and CEO, Ross. Up through the beginning of Version 5, Ross was the sole back-end developer at Punchmark. Today, he’s taken a step back from coding on the platform and lends his years of expertise toward maintaining our servers, helping to improve our EDGE integration, and of course, running the company on top of all of that. None of us would be here if it weren’t for Ross, and he continues to be invaluable.
Hey everyone! As you know, we've recently been making some big pushes toward improving page speed on our platform. Because of that, I wanted to talk quickly about why page speed scores are important, but also the ways they're often misunderstood – and even the ways that Google lies to you about them.
Page speed scores, as I'm sure you know, are just an approximation of how quickly your site loads. This is obviously important for giving customers the best experience possible - no one wants to wait for a slow website. And site speed does affect SEO, because Google ranks sites better when they have a healthy bounce rate, more time spent on the site, and more page interaction. Faster websites tend to do better in all of these metrics.
All that said: the Google page speed score itself has no influence on your website’s ranking. If the bounce rate is healthy (Katie will be talking more about bounce rate in her post on Monday) and people are browsing your site a lot, having a low page speed score won’t keep your site from ranking well. This gets into some of the downsides of the metric: it’s not always perfect at predicting how fast or how good a site actually feels to use. Because websites can be built so many ways, Google can’t always perfectly analyze how the load time will translate to user experience. So while the page speed score is important, it’s also worth taking with a grain of salt.
Then, we have the ways that Google embellishes the page speed recommendations. Here’s one example: you’ve likely seen Google recommend that you serve images in “next-gen formats,” and that doing so can save 30, even 45 seconds on page load. A quick sanity check will tell us that, on a page that only takes 15 seconds to load completely, nothing can actually save us 30 seconds of page loading. What’s happening here is that the next-gen image format they’re talking about, WebP, was actually developed by Google. And in fairness to them, it is a great format. Optimally compressed WebP images can be about 30% smaller than JPEG images with the same image quality. Google, however, doesn’t only want you to use it because it’s faster; they want you to use it because they want to develop and control the technologies that run the internet. By exaggerating certain aspects of page speed and steering you toward their own solution, they can do exactly that.
So when looking at your page speed, keep these things in mind. This isn’t to say that a low page speed score is okay – you always want to make your site as enjoyable as possible for users to browse. But do remember that page speed is simply one predictor for the actual metrics that determine your search ranking, rather than a tool for ranking in itself. If you have more questions about this, feel free to put them in the comments, and if you have suggestions for more dev dives, let me know!
Hey everyone! Today, I’m going to talk about one of the unfortunate realities of any piece of software: bugs. We want our platform to be the best that it can be, so we really appreciate when our clients let us know when they’ve found issues. Bugs are frustrating, so I want to discuss a bit of the process that goes on behind the scenes when you report a bug to our customer success team.
The first step to fixing any bug is to figure out exactly what’s wrong. The most helpful thing that you can do for us is to provide us with as much detail as possible about the issue you’re seeing. For instance, bug reports saying something like “my items are broken” are much less useful than saying “the videos from my last EDGE import aren’t showing on the website.” If you provide us with as much detail as possible from the start, there’s a much lower chance that we have to reach back out for more information, which always slows down the process. Some helpful things that we almost always like to know are exactly what’s broken, where you’re seeing it, the time that you first noticed it, and the browser you’re using. We always prefer being given information that we don’t need rather than having to ask for more, so any detail you can think of is appreciated when submitting a bug report.
Once the bug is tracked down, we can determine the steps to fix it. Not all bugs are created equal – sometimes, it may be as simple as a quick update to a local file in your folder. Other times, we may have to rewrite entire parts of the platform (such as our recent debacle with Facebook’s API). We hope to solve most bugs within a week or two. If a bug looks to be especially challenging, our customer success team will reach out with a rough timeline of when we expect to be able to fix it.
Finally, once the development team has fixed the bug, we like to go through a round of testing to be sure that everything is looking good. Because there are so many variables to bug fixes, it’s actually pretty common for the bug to look fixed to the developer working on it, but still be broken for others, or for the bug fix to accidentally break other parts of the platform. To mitigate this, we go through a couple rounds of internal testing – first with other developers, then with our customer success team. Then, we’ll reach out to the client who reported the bug and ask them to test. If everything looks good, we release the fix platform-wide.
I hope this has been helpful! If you take one thing away from this post, it's this: give us as much detail as possible when you report an issue! It really helps us solve your issues as quickly as possible. If you have any questions, leave them in the comments!
Hey everyone! Today, I thought I’d give you a small peek into one aspect of our development process: designing new features, and making sure that those features don’t cause unforeseen problems. Today, I’ll be using one of our biggest recent releases as an example: EDGE EDT bi-directional integration. Single-directional integrations, such as TPW, have only one side (the EDGE) send data to the other (Punchmark) and are relatively simple. If we see new data on the website uploaded from the EDGE, we make those changes on our end, and that’s about it. But some interesting issues come up when you allow both sides to send data back and forth.
Let’s say you have a store with two locations, and the only way to manage inventory between them is by calling the other store with changes you make. In most cases, this is fine. But what happens, for instance, if both stores decide to put a piece on sale at the same time, but for different amounts? Each store calls the other to let them know of the price change, each store updates to the other store’s price, and now you have two different prices for the same piece! In the tech world, this is called data fragmentation, and it’s one of our biggest hurdles. When we design a two-way integration, we have to account for the potential of data fragmentation between the EDGE and Punchmark so that your data is always the same in your POS system and your website.
So how do we solve this issue? One of the easiest things we can do is to simply compare the exact times that the changes were made. If we go back to our analogy, both stores could tell the other not only the new price, but also the precise time that they updated the piece. This way, we can always use the newest value, and ignore an update if it's older than the one we currently have. Additional data can also be passed along about the way the piece was edited, with both sides agreeing on a set of rules that are used to determine which set of data is correct. There are lots of potential ways that data can become fragmented, so the rules used get equally complex.
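Here's a toy version of that timestamp comparison, just to show the shape of the idea. The field names are made up, and our real integration uses a richer set of rules than a single "newest wins" check.

```php
<?php
// "Last write wins" sketch: only accept an incoming update if it's newer than
// the value we already have. Records look like:
//   ['price' => 499.00, 'updated_at' => '2021-11-01 09:30:00']
function apply_price_update(array $current, array $incoming): array {
    if (strtotime($incoming['updated_at']) > strtotime($current['updated_at'])) {
        return $incoming; // the incoming change is newer – accept it
    }
    return $current;      // ours is newer – ignore the stale update
}
```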
I hope you enjoyed this peek into how our dev team solves problems! If you want to hear more about this, or if there’s anything else you want to know about our technology, let me know in the comments and I may write a future dev blog about your topic!
Hey everyone! Have you ever wondered why, when you call our customer success team, we often ask what browser you’re using? A lot of people think that all browsers will work for all websites, but that isn’t actually true.
In a perfect world, browsers would all show websites in the same way. Unfortunately, that’s not the case – different browsers are all built by different companies, each of them with different ideas of how best to render webpages. On top of that, new features and bug fixes are often being released for the languages that run in the browser (JavaScript and HTML), and browsers aren’t always updated to support these features. This means that web developers will sometimes have to actually write a certain piece of code in multiple different ways in order to ensure that it’s compatible with every browser. Some browsers, such as Brave and Opera, have extra features such as built-in ad blocking that can cause even more issues.
Now, I need to say this: if you’re using Internet Explorer, I highly suggest that you switch to a different browser. Google Chrome and Mozilla Firefox are both great options, and Microsoft even has an official replacement for Internet Explorer called Edge. Because Internet Explorer is so old, it doesn’t support many of the newer features that have been added to JavaScript and HTML, meaning that you’ll be missing out on the best user experience on a lot of websites. In some cases, modern websites may not even support Internet Explorer at all.
But even Chrome and Firefox aren’t perfect. Chrome, especially early in its life, was known for having a lot of compatibility issues. This is because their JavaScript engine (called V8) was heavily optimized. This made it fast, but that optimization didn’t work with every feature that JavaScript offered. Nowadays, Chrome is such a heavily used browser that almost all websites have added in code to improve compatibility, but bugs are still being found.
So when we’re asking which browser you use, this is what we’re trying to figure out – is there an issue that we know your browser has with the code that we’re running? We do our best to make sure that our platform is compatible across any browser that you want to use, but it’s always possible that bugs you’re seeing are caused by specific browsers that we haven’t accounted for.
Hey everyone! Today I'm going to change it up a bit and not talk about our development process. Because this week's episode of In the Loupe is on social strategy, today we'll be talking about one of the most discussed (but often not understood) social topics – "the algorithm."
Chances are that if you’ve spent any time on Facebook or Instagram, you’ve heard of it. Both Facebook and Instagram stopped showing posts chronologically years ago, and now use a machine learning algorithm to try to show you the posts that it thinks you’ll enjoy the most. This means that social media experts or marketing experts like our own Katie Kinlaw can use this to their advantage and maximize the effectiveness of social strategies. But first, we have to know – how does this mysterious algorithm even work?
In the most basic terms, machine learning algorithms are just complex pattern finders. They work over massive sets of data (millions or even billions of data points). Developers writing the algorithm will specify several factors that they think will contribute to predicting the outcome that they want, and the computer will do some crazy math to try and find patterns that it can use to predict future results. In the case of Facebook and Instagram, this essentially means that they use your previous interactions to show you new posts that they think you'll enjoy.
There are a lot of different machine learning algorithms that you can use to try and make these predictions (if you’ve heard of neural networks, those are the most popular right now), but what they all have in common is that the developers pick which criteria they want to focus on, and how heavily they want to weight these criteria. How developers select what is important to them can make a shockingly big difference in how the algorithm performs. For instance, Facebook used to prioritize posts from friends and family, as well as content that it deemed to be informative or entertaining. However, in 2018, they announced that they would be focusing more heavily on posts that get high levels of engagement. As a result, a study in 2019 showed that engagement on Facebook had gone up a whole 50% because of the change, but it also found that the best performing content was controversial, which led to an increase in divisiveness on the platform.
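To make the idea of weighted criteria concrete, here's a deliberately over-simplified scoring function. The factors and weights are completely made up – real feed algorithms use thousands of signals and learn their weights from data rather than having them hand-picked like this.

```php
<?php
// Toy feed-ranking score: hand-picked factors and weights, purely illustrative.
function rank_post(array $post): float {
    $weights = ['comments' => 3.0, 'shares' => 2.0, 'reactions' => 1.0];
    $score = $post['comments']  * $weights['comments']
           + $post['shares']    * $weights['shares']
           + $post['reactions'] * $weights['reactions'];

    // Older posts score a little lower, so the feed doesn't go stale.
    return $score - 0.5 * $post['hoursAgo'];
}

echo rank_post(['comments' => 12, 'shares' => 4, 'reactions' => 90, 'hoursAgo' => 6]); // prints 131
```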
What this means for you is that if you want your social strategy to be effective, it's important to pay attention to which factors contribute to being ranked highly by each platform's algorithm. It's also important to remember that the criteria for success are changing all the time, so you'll need to adjust accordingly!
Hey guys! We’ve been talking a lot about digital marketing in our other channels of communication, so I thought it might be interesting to go over some of the tech behind digimark. There are a lot of types of digital marketing out there, but I’ll be going over one of the most ubiquitous: retargeting.
First off, for those who don’t know, retargeting is used to serve ads to customers who have interacted with your site by viewing a product, adding items to their cart, etc. The chances of converting a first time user on your site are low, and customers who do convert are more likely to be repeat buyers, so keeping your brand image in front of customers after they leave increases the likelihood that they’ll come back to your site later.
Retargeting works using browser cookies. Not to be confused with the delicious kind of cookie, a browser cookie is a piece of data dropped into users’ browsers when they visit a webpage. By default, most browsers don’t automatically clear cookies, meaning that even if a user only visits your site once, there’s potential to retarget them well into the future. Because of their longevity, cookies are also useful for a lot of cases where a website might want to remember your identity, such as checking to see if you’re logged in. We use a similar technology on Punchmark websites to aid in all kinds of site functionality.
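Here's a bare-bones sketch of the mechanic: drop a cookie when someone views a product, then read it back on a later visit. The cookie name, value, and lifetime are made up for the example – real retargeting vendors set their own cookies through their ad tags rather than code like this.

```php
<?php
// On the product page: remember what was viewed for ~90 days (hypothetical cookie).
setcookie('last_viewed_item', 'SKU-12345', time() + 60 * 60 * 24 * 90, '/');

// On a later visit: the browser sends the cookie back with every request.
if (isset($_COOKIE['last_viewed_item'])) {
    echo 'Welcome back! You were looking at ' . htmlspecialchars($_COOKIE['last_viewed_item']);
}
```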
Once the cookie is placed in the user’s browser, retargeting providers such as Google can use that cookie to determine what ads to serve to the user. This means that once they leave your site and browse elsewhere, Google can know to serve relevant ads to them that will point them back to your site. This helps keep your brand relevant in the user’s mind and dramatically increases the chance of future conversion.
Retargeting is only one form of targeting that's used in digital marketing – if you want to know more about the others, let me know!
Hey everyone! As we start talking more about the technical aspects of your website, I figured that this week, it would be good to go over the difference between front end code and back end code and how they relate to your website. Knowing the difference is a fundamental step toward understanding the ins and outs of your website.
Front end code runs everything that you see and interact with. The structure of the page, the fonts, the animations – these are all part of the front end. The languages that we use on the front end are:
HTML (Hypertext Markup Language): this gives the page structure. The layout and flow of the page are all defined in the markup. Paragraphs, images, links, and more are all part of the HTML.
CSS (Cascading Style Sheets): this makes your site look good. CSS takes the HTML structure of the page and defines variables such as fonts, colors, the size of elements, etc. Without CSS, all webpages would look like they were straight out of 1992.
JavaScript: this handles most of the dynamic content on the page, and communicates with the back end code. Anything that loads new data, handles special actions when clicking buttons, etc. is run through the JavaScript.
Our main front end developers are Dan and Kyle. They’re experts at taking designs from Mike and Sarah and bringing them to life in the code.
Back end code runs behind the scenes. Any time you’re updating your site, grabbing or saving data, that’s the back end code at work. The languages we use on the back end are:
MySQL (SQL stands for Structured Query Language): this is what we use to store, organize, and grab data. MySQL is what's known as a relational database system – you can create tables for different parts of the site (items, categories, etc.) that are somewhat similar to Excel tables, but you can also link the tables using queries to grab data in very dynamic ways.
PHP (PHP: Hypertext Preprocessor): this language does the heavy lifting on the back end of the site. PHP is responsible for getting everything together – the HTML, CSS, JavaScript, and data from MySQL – and passing it back to your browser. There's a small sketch of that flow just below.
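Here's that sketch – PHP asking MySQL for related data and handing HTML back to the browser. The database credentials, table names, and columns are hypothetical, not our actual schema.

```php
<?php
// Illustrative back-end flow: query MySQL with PHP, then output HTML.
$db = new PDO('mysql:host=localhost;dbname=store', 'user', 'password');

// The "relational" part: join items to their category to grab both in one query.
$stmt = $db->prepare(
    'SELECT i.name, i.price, c.title AS category
       FROM items i
       JOIN categories c ON c.id = i.category_id
      WHERE c.slug = ?'
);
$stmt->execute(['engagement-rings']);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $item) {
    echo '<li>' . htmlspecialchars($item['name']) . ' – $' . $item['price'] . '</li>';
}
```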
Bryan, Ross, Sean, and I are the main back end developers at Punchmark. Our job is to get all of the data together, then pass it to Dan and Kyle to make it pop.
Hopefully this helps you understand a bit more about front and back end code and what they’re useful for. If you have any questions about this, feel free to ask!
Hi all, I'm Kyle. I'm one of the developers here at Punchmark. I'll be sharing with you my technical thoughts and tidbits of knowledge on front-end development. My hope is to help those who are interested gain a better understanding of how a website works, beyond what is seen.
Punchmark has built robust software that bridges the gap, letting our customer base be hands-on with managing their digital needs. There's a huge barrier to entry in grasping any given programming language. Then, there's engineering something useful with it once you have an understanding. I would compare it to telling a captivating story in a foreign language you're learning to speak. You start with single words, fumble through sentences, and eventually, over time and with a lot of practice, your voice is heard. Front-end development is the digital translation of data and ideas. Let's start with A, B, C...
(A)RCHITECTURE
The digital landscape is similar to a physical one. There is architecture, art and science behind building, and we need materials to build. We refer to our materials as a STACK. Here's a list of resources we build with:
PHP - programming language
jQuery - JavaScript library
MySQL - relational database management system
Amazon CloudFront - content delivery network
These resources go hand in hand with PageBuilder2.
(B)UGS
There will always be bugs. There's an element of trial and error involved with every task, and in the front-end world, the errors are referred to as bugs. It's vital to understand why things work the way they do in order to successfully troubleshoot the issues at hand. That comes with developing a healthy appetite for asking why. Patience is also good practice, because there are a lot of moving parts. A desire to learn and understand keeps the frustration of demanding expectations at bay. This is what we're after. We genuinely want to help and accommodate all of our clients, but it's how we go about it, working together, that makes a difference.
(C)RITICAL THINKING
Methodically thinking through what is happening will get us to where we want to be. When you're trying to solve any problem, critical thinking is common practice. You run scenarios through your head to eliminate possibilities, leading you to the answer. The why answers the what, how, where, and when. After solving every problem, take the child-like approach of relentlessly asking why:
What is happening? But why is it?
How is it working/supposed to work? But why does it?
Where is it located? But why is it?
When does it occur? But why does it?
Ask us too, and we'll ask you. The whole point is that we have to work together to figure it out. These are the stepping stones to understanding what we're working with so that we can work better together.