Here's my must-have list:
I've asked on Twitter multiple times, and the most recent time I asked, I got a response from my friend Jason Beggs suggesting I should look into prettier-plugin-blade.
So I set out, with Jason's help, to get it working on a new Laravel app, later receiving some help from the tool's creator, John.
What I found is that the ideal solution to my wish list above is using prettier-plugin-blade
together with Tailwind's official Prettier plugin. It works on CI and the CLI, has IDE integrations, works in Blade files, and uses Tailwind's official class ordering.
Let's dig in!
First, install prettier, prettier-plugin-tailwindcss, and prettier-plugin-blade with npm install:
npm install --save-dev prettier-plugin-blade@^2 prettier prettier-plugin-tailwindcss
Next, add a .prettierrc config file to your project root. Here is the most basic file you'll start with, before you start actually configuring Prettier:
{
"plugins": ["prettier-plugin-blade", "prettier-plugin-tailwindcss"],
"overrides": [
{
"files": [
"*.blade.php"
],
"options": {
"parser": "blade"
}
}
]
}
On top of this, you can add your own lines to configure Prettier; here are those I've added, but you can find more here.
{
"printWidth": 120,
"semi": true,
"singleQuote": true,
"tabWidth": 4,
"trailingComma": "all",
"plugins": ["prettier-plugin-blade", "prettier-plugin-tailwindcss"],
"overrides": [
{
"files": [
"*.blade.php"
],
"options": {
"parser": "blade"
}
}
]
}
- printWidth: Defines the width (in characters) you'd like your lines to reach; this is not a hard limit, but a general guideline to Prettier. Prettier recommends 80. I'm not sure what I'll land on; I'm using 120 for now because that's what was in Jason's original config file.
- semi: Adds semicolons at the end of all lines in JavaScript.
- singleQuote: Uses single quotes when possible in JavaScript.
- tabWidth: Sets the number of spaces in each level of indentation.
- trailingComma: Defines when to put trailing commas in multiline structures.

Next, add a .prettierignore file. I hacked this together using Jason's and some other references online, but I can't say this is perfect. Like .gitignore, everyone will have their own way of doing it.
node_modules
dist
.env*
vendor/
/vendor
public/
.git
**/.git
package-lock.json
composer.lock
Now that we have the package installed, let's take a look at what we can do with it.
First, if you want to run Prettier, you have a few options. The most common is to have Prettier fix your files. You'll run npx prettier --write
and pass the directory you're fixing. I'm only using Prettier for frontend code, so I'll pass it the resources/
directory:
npx prettier --write resources/
If you want to check whether your code passes, but not actually format anything (most common in CI), you can instead pass the --check
flag, which will return a failing exit code if your code isn't formatted correctly:
npx prettier --check resources/
So let's take a project where I have a single file with a Tailwind-styled div
, living at resources/file.html
:
<html><body>
<div class="mb-2 font-bold p-2">
Content!
</div>
</body></html>
If I run npx prettier --check resources/file.html
, it'll fail:
± npx prettier --check resources/file.html
Checking formatting...
[warn] resources/file.html
[warn] Code style issues found in the above file. Run Prettier to fix.
And if I run npx prettier --write resources/file.html
, it'll fix it:
± npx prettier --write resources/file.html
resources/file.html 257ms
And now we've got a fixed file:
<html>
<body>
<div class="mb-2 p-2 font-bold">Content!</div>
</body>
</html>
That's it!
One of the benefits of using this package specifically is that it understands Laravel-specific contexts. For example, if you have strings inside of Blade, it'll fix them as well. As you can see, it'll re-order the classes here even when they're inside a Blade string, turning this:
<div class="{{ 'mb-2 font-bold p-2' }}">Content!</div>
to this:
<div class="{{ 'mb-2 p-2 font-bold' }}">Content!</div>
You can learn more about the specific benefits of this package, and how to customize some of its unique settings, on its documentation site.
At Tighten, we use Duster, which includes Pint, so this isn't a feature I use, but if you use Pint on your projects, you can also hook that into your Prettier runs.
You can learn more in the documentation, but you'll essentially set useLaravelPint
to true
in your .blade.format.json
configuration file, and then the Prettier plugin will run Pint as a part of any calls to it.
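Based on that description, the config would be a short JSON file along these lines (a sketch — check the plugin's documentation for the exact file name and available options):

```json
{
    "useLaravelPint": true
}
```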
There are a few other ways you can use this package other than running it manually on the command line.
If you want to make it easier to manually (or programmatically) run your format commands, or to show future devs how and where to run them, you can add a script to your package.json
file:
{
// ...
"scripts": {
// ...
"format": "npx prettier --write resources/"
}
// ...
}
You can now run npm run format
to trigger your run.
Husky is a popular JavaScript package that makes it easy to manage your Git hooks. At Tighten, we often use it to ensure everyone on the project remembers to run code formatting tooling with each commit.
For a tutorial on how to set up Husky, check out Tower's guide to installing Husky. Once you have Husky set up, you can add a step to run Prettier as a part of your Git workflow; here's my lint-staged.config.js
(which I should probably tweak to only run on files in resources/
):
export default {
'**/*': 'prettier --write --ignore-unknown',
};
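If you do want to scope it to resources/, the config can target that directory instead of everything (a sketch using lint-staged's standard glob syntax):

```js
// lint-staged.config.js — only touch files under resources/
export default {
    'resources/**/*': 'prettier --write --ignore-unknown',
};
```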
If you want to set up VS Code to correctly parse your Prettier config with this plugin, you have to do a tiny bit of configuration. I've seen a few folks online saying you have to trick VS Code into treating Blade like HTML, but the plugin's author, John, told me the correct approach is simpler: open up your VS Code configuration and add the following configuration items:
{
"editor.defaultFormatter": "esbenp.prettier-vscode",
"[blade]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"prettier.documentSelectors": [
"**/*.blade.php"
]
}
This, of course, requires you to have the Prettier VS Code plugin installed. Once you have the plugin installed, you can run Prettier by triggering "Format Document" from the command palette, or you can set the editor.formatOnSave
setting to true
and then it'll run Prettier every time you save a file.
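Putting those together, a settings.json that formats Blade files with Prettier on every save might look like this (a sketch combining the settings mentioned above):

```json
{
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "esbenp.prettier-vscode",
    "[blade]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    },
    "prettier.documentSelectors": ["**/*.blade.php"]
}
```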
If you want to set up Prettier in your GitHub actions workflow, you have two options: Fail the action if it's not correctly formatted, or just format it on the server.
If you want to format your code as a part of your GitHub Actions workflow, check out this tutorial.
If you want your Prettier step to fail if the Prettier check fails, check out this tutorial.
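As a rough sketch of the "fail the action" option, a minimal workflow just installs your npm dependencies and runs the --check command from earlier (the workflow shape and action versions here are illustrative, not taken from the tutorials linked above):

```yaml
name: Prettier Check
on: [push]
jobs:
  prettier:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # Fails the job with a non-zero exit code if anything is unformatted
      - run: npx prettier --check resources/
```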
That's everything!
Do you have any questions, or any preferences for how you like to set up your Prettier? Let me know on Twitter!
However, this site is powered by a static generator, so the data on that page will go out of date pretty soon after each deploy.
I considered setting up a local cron job on my server to run the deploy script once a day, which is certainly my easiest option. But I didn't want to duplicate my deploy script in cron that I had already configured in my host, Laravel Forge.
Thankfully, Forge gives you an easy webhook to trigger your existing deploy script, so I figured that'd be my cleanest option.
I'll want to call my webhook, which we'll approximate as https://forge.laravel.com/webhooks/my-webhook-yay
, once every day.
First, I set up a new GitHub Action workflow in my project. I created a YAML file for the workflow at .github/workflows/deploy-on-schedule.yml
, with the following basic contents:
name: Deploy Every Day
To learn more about how this works, check out GitHub's official documentation on using GitHub Actions.
There are two main ways to call a URL from a GitHub Action workflow: with curl, and with a published GitHub Action. I want you to see how simple it is to call curl yourself, but it's no faster than the GitHub Action, so use whichever is more comfortable for you.
You can call any arbitrary shell code directly from your workflow, so you can build a curl command directly in your action:
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use curl to ping webhook
run: |
curl -n "https://forge.laravel.com/webhooks/my-webhook-yay"
If you want to add headers or extra content to the request, you can flesh out your curl command as much as you want, adding a \ at the end of each line:
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use curl to ping webhook
run: |
curl -n "https://forge.laravel.com/webhooks/my-webhook-yay" \
--header 'Content-Type: application/json' \
--data '{"some-data":"here"}'
There's also a GitHub Action dedicated to calling external URLs: joelwmale/webhook-action.
Here's the syntax for what we're doing here:
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use webhook action to ping webhook
uses: joelwmale/webhook-action@2.3.2
with:
url: https://forge.laravel.com/webhooks/my-webhook-yay
If you want to learn more about how to use this action, including how to pass headers and data, check out this introductory post.
Now that we've built out the ability to call a webhook from our workflow, how do we call a workflow on our own schedule?
If you're familiar with GitHub Actions, you're likely familiar with the on
property, which allows you to define what events trigger this workflow running. Normally, we'd attach it to Git events—push
to a certain branch, for example.
But we can also define it on a schedule, using the same syntax we use for cron:
on:
schedule:
- cron: "0 0 * * *"
The above schedule will run our workflow once a day, at midnight UTC.
You can take a look at the GitHub Actions docs for schedule if you'd like to learn more.
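For reference, here's how the five fields of that cron expression break down (GitHub Actions schedules always run in UTC):

```yaml
schedule:
  # ┌───────── minute (0-59)
  # │ ┌─────── hour (0-23)
  # │ │ ┌───── day of month (1-31)
  # │ │ │ ┌─── month (1-12)
  # │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
  - cron: "0 0 * * *"   # midnight UTC, every day
```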
So let's put these together into a single workflow file:
name: Deploy Every Day
on:
schedule:
- cron: "0 0 * * *"
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use curl to ping webhook
run: |
curl -n "https://forge.laravel.com/webhooks/my-webhook-yay"
Or, if you want to use the Action:
name: Deploy Every Day
on:
schedule:
- cron: "0 0 * * *"
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use webhook action to ping webhook
uses: joelwmale/webhook-action@2.3.2
with:
url: https://forge.laravel.com/webhooks/my-webhook-yay
That's it for building out our main workflow! But I have two more tricks to share that helped me in building this action.
workflow_dispatch
If you're building a workflow that is only triggered in certain circumstances (whether it's a given push, or on a schedule), you may sometimes want to trigger it manually, especially when you're first building it out. But how?
There's another event (defined under the on:
property of our config) that will serve us in this circumstance: workflow_dispatch
. There's a lot of work you can do to customize workflow_dispatch, but for now let's just enable it without parameters:
name: Deploy Every Day
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use curl to ping webhook
run: |
curl -n "https://forge.laravel.com/webhooks/my-webhook-yay"
Now that we have that entry there in our YAML, we can manually trigger a run of this workflow. Open up your repo in GitHub; choose the "Actions" tab, and choose your relevant workflow in the left section. You'll now see a banner saying "This workflow has a workflow_dispatch
event trigger.", with a "Run workflow" button next to it. You can use this button to manually trigger runs of this workflow!
Let's say you want to extract the specific URL out of your code and instead store it in GitHub secrets. Let's take a quick look at how you'd do that.
First, for the GitHub Action, which is simple:
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use webhook action to ping webhook
uses: joelwmale/webhook-action@2.3.2
with:
url: ${{ secrets.webhook_url }}
It's a bit more complex to pass an environment variable into bash, but it's still quite manageable:
jobs:
webhook:
name: Ping webhook
runs-on: ubuntu-latest
steps:
- name: Use curl to ping webhook
env:
WEBHOOK_URL: ${{ secrets.webhook_url }}
run: |
curl -n "$WEBHOOK_URL"
While I'm using Laravel Forge as the context here, this post is really just about how to schedule GitHub Actions workflows and how to call URLs within them. However, if you are using Forge, Tightenite Guillermo Cava Nuñez pointed out that Forge has its own GitHub Action.
Additionally, the Forge documentation points out how to use the Forge CLI to run more than just deploys in your GitHub Action workflows, if you're interested.
That's it! I hope you learned something useful!
I normally use Laravel Breeze to build my projects, but recently I started a new project that needed team support, so I figured I'd finally use Jetstream on a real project, instead of just for fun.
There's a lot I really like about Jetstream, but one thing that bothers me—I know, it's not super reasonable, but whatever—is that it publishes so many tests out of the box. I want them to run in CI, so I can be confident the things they're testing are still covered, but I'd rather skip them locally.
But... how?
PHPUnit allows you to define one or more testsuites, each of which points to one or more directories or files and can be given a name.
In Laravel, here's the default configuration (from phpunit.xml, with extraneous information removed):
<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
<testsuites>
<testsuite name="Unit">
<directory>tests/Unit</directory>
</testsuite>
<testsuite name="Feature">
<directory>tests/Feature</directory>
</testsuite>
</testsuites>
</phpunit>
So, we're loading two sets of tests, each of which gets a nickname.
Since I want to treat the Jetstream tests as separate from the others, I realized the best way to accomplish this is to move them out of the tests/Feature
folder and into their own folder, and their own testsuite. So, I moved all of the default Jetstream tests from the tests/Feature
directory into a new folder tests/Jetstream
instead, and added a new testsuite attached to that folder:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
<testsuites>
<testsuite name="Unit">
<directory>tests/Unit</directory>
</testsuite>
<testsuite name="Feature">
<directory>tests/Feature</directory>
</testsuite>
<testsuite name="Jetstream">
<directory>tests/Jetstream</directory>
</testsuite>
</testsuites>
</phpunit>
Note: If you're using Pest tests, you can move these files and they'll just be good to go. If you're using PHPUnit tests, you'll also have to modify the files to update their namespaces from Tests\Feature to Tests\Jetstream.
Now, we can control when our Jetstream tests are and aren't running.
If we want to run just one testsuite, we can pass that suite's name to the --testsuite
flag:
php artisan test --testsuite Unit
We can also choose to exclude a test suite:
php artisan test --exclude-testsuite Jetstream
However, what I want is for that test suite to be ignored by default; that's not something we can do with command-line flags alone.
defaultTestSuite
Tighten's Keith Damiani pointed me in the direction of the defaultTestSuite configuration in PHPUnit. It allows you to define which test suites run when you don't pass the --testsuite or --exclude-testsuite parameters.
Now that we have our Jetstream
testsuite separated, we can exclude it by default, by setting the defaultTestSuite
property on the base <phpunit>
declaration in phpunit.xml
:
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
bootstrap="vendor/autoload.php"
colors="true"
defaultTestSuite="Unit,Feature"
>
If your app is using Pest, there's a line in tests/Pest.php
that loads the tests in tests/Feature
with RefreshDatabase
as a trait:
uses(TestCase::class, RefreshDatabase::class)->in('Feature');
Since the Jetstream
tests expect a database, you'll want to duplicate that line for the Jetstream tests, and end up with the following:
uses(TestCase::class, RefreshDatabase::class)->in('Feature');
uses(TestCase::class, RefreshDatabase::class)->in('Jetstream');
Here are my steps, simplified:
1. Move the default Jetstream tests from the tests/Feature folder into a new tests/Jetstream folder.
2. Add a testsuite for that folder in phpunit.xml:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
<testsuites>
<!-- ... -->
<testsuite name="Jetstream">
<directory>tests/Jetstream</directory>
</testsuite>
</testsuites>
</phpunit>
3. Update Pest.php to load that folder separately:
uses(TestCase::class, RefreshDatabase::class)->in('Feature');
uses(TestCase::class, RefreshDatabase::class)->in('Jetstream');
4. Update phpunit.xml to run only the Unit and Feature testsuites by default:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
bootstrap="vendor/autoload.php"
colors="true"
defaultTestSuite="Unit,Feature"
>
That's it!
Now, when I run php artisan test
, it excludes the Jetstream testsuite by default; but if I want to run those tests myself in CI, I still can, with php artisan test --testsuite Jetstream
.
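In CI, that can be as simple as two steps in your GitHub Actions workflow — one for the default suites and one for the excluded suite (a sketch that assumes your workflow already sets up PHP, Composer, and a database):

```yaml
steps:
  # ...PHP/Composer/database setup elided...
  - name: Run Unit and Feature tests
    run: php artisan test
  - name: Run Jetstream tests
    run: php artisan test --testsuite Jetstream
```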
I put up my slides from my AlpineDay talk, "From Vue to Alpine: How & Why": https://t.co/xTzIl0FGun
— Matt Stauffer (@stauffermatt) June 10, 2021
And also the files I used to test performance: https://t.co/tB5EThfEEJ
In the talk I shared a few situations in which I've moved applications that were previously using Vue to use Alpine instead, for the sake of Google's Core Web Vitals. These sites were using Vue for light enhancements on top of a server-rendered app, and I found that Alpine is lighter to load and parse in these cases.
I still stand by everything I said in the talk. However, a large swath of the Internet discovered my slides, hadn't heard the talk, and assumed I was badmouthing Vue.
So, I wanted to write a post about how we can optimize our usage of Vue for this same scenario! How do we minimize Vue's impact on the Core Web Vitals?
In my talk, I described working on the Tighten website to improve its Core Web Vitals scores.
For that particular site, we were using Vue for some very small components, and the combination of Vue's loading time and the components' initial processing time in the browser led to a significant delay for loading the page.
I discovered I could solve our problem by replacing those components with Alpine components, and realized — and then gave a talk about this idea — that Alpine is an ideal fit, even for a Vue-loving agency like ours, for enhancements above vanilla JavaScript and below heavy dashboards and SPAs.
However, many of our sites — and likely yours, if you're reading this — are ideal for Vue, so we're not just going to drop it. Instead, let's look: can we optimize our page load speeds?
There's a whole group of answers that have to do with code splitting and async loading, which you can find a link to down at the bottom of this post. But there are two strategies that are very similar to what I talked about in my talk, so they're the ones I'm going to cover here.
If you define your Vue components inline in your HTML (rather than in single-file components), it turns out this requires a heavier (larger file download) and slower (more processing power spent) version of Vue. This version has an in-browser compiler, which causes the impact to load size and processing time.
I talked briefly with Evan You, the creator of Vue, who taught me that the "runtime-only" version (meaning, with no compiler) of Vue is two-thirds the download size of the full Vue. He also gave me two ideas for how we can shrink Vue's load size and processing time.
As I mentioned above, if Vue is going to have to compile Vue component definitions from your HTML, it has to include an in-browser compiler, which makes the script file larger and the processing time take longer.
So, your first step is to use a build tool (like Laravel Mix), pre-compile your templates, and then use the "runtime-only" build of Vue.
"Pre-compiling" in this scenario means using single-file .vue
components that you then process with your build tooling. These files are compiled into JavaScript, so the end user isn't delivered HTML that needs to be parsed, but a JavaScript file ready to be executed.
"Not pre-compiling" simply means placing your Vue template inline in your HTML; this may be convenient (and sometimes it's even necessary), but if you work this way you can't drop down to the smaller Vue file.
In Laravel Mix, you can switch to the runtime-only build by passing {runtimeOnly: true}
to your mix.vue()
call:
mix.vue({runtimeOnly: true})
If you're using a CDN to load Vue (which I assume is unlikely if you're pre-compiling your templates, but what do I know?), your link likely looks something like this:
<script src="https://cdn.jsdelivr.net/npm/vue@2.6.14"></script>
However, you can get a minified version easily (which you should do no matter what):
<script src="https://cdn.jsdelivr.net/npm/vue@2.6.14/dist/vue.min.js"></script>
And if you are pre-compiling your templates, you can ask for the runtime-only version by appending runtime
after vue
in the file name:
<script src="https://cdn.jsdelivr.net/npm/vue@2.6.14/dist/vue.runtime.min.js"></script>
If you use other build tooling, you'll want to modify your build script from pulling the vue.whatever.js
file to the vue.runtime.whatever.js
file. You can take a look at Laravel Mix's source here to see how they do it.
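For plain webpack, the approach (documented in the Vue 2 docs) is an alias so that every import of vue resolves to the runtime-only build:

```js
// webpack.config.js
module.exports = {
    resolve: {
        alias: {
            // "vue$" matches exact imports of "vue" and swaps in the runtime-only build
            vue$: 'vue/dist/vue.runtime.esm.js',
        },
    },
};
```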
Evan also shared another tip for speeding up Vue, especially when it comes to the processing time it takes on each page load.
If you're working with a server-rendered app and just adding a light touch of Vue here and there, don't mount to the body or a single master wrapper div. If you do, Vue has to parse your entire application's body content and replace it with a new DOM on every page load.
Rather, bind Vue individually to just the spots where it's needed. Move from this:
new Vue({
el: "#app",
});
to this:
new Vue({
el: "#hamburger",
});
new Vue({
el: "#sliders",
});
There are a ton of additional considerations you can look at to improve your Vue applications' performance. These are just two that deal specifically with initial load time and processing.
Here are a few others worth digging into. There's a fantastic series on VueSchool called Vue.js Performance, which I would recommend and have linked in several of these items.
I asked around for ideas, and Eric Barnes mentioned that he uses an RSS feed to post to various social networks using Zapier. Perfect!
Note: Twitter requires Zapier to scrub your tweets of any @ mentions, so if your tweets rely on @ mentions, Zapier won't work for you.
I'm building an app that I want to tweet when certain events happen. I was originally going to use the Twitter Laravel Notification Channel, but now my goal is to push a webhook up to Zapier and have Zapier post the tweet.
I'm working in a command that loops over all of the items, looks for those needing notification, and then pushes out the notification:
foreach ($this->itemsNeedingNotification() as $item) {
// Build and send tweet for $item
}
Log into Zapier and you will find yourself on the dashboard. You can fill out the various sections to figure out exactly what you want to do:
Or you can just click directly on this link: Create tweet in Twitter when catch hook in Webhooks by Zapier
Now we've got a custom Webhook URL. Copy that URL and paste it into your Laravel .env
file as ZAPIER_WEBHOOK_URL
.
Now, Zapier wants us to test the webhook, so let's pause there and get to writing some code.
Recent versions of Laravel have included an HTTP client, the direct successor of Adam Wathan's Zttp client, that makes it super simple to send HTTP requests. Let's build one in our command:
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Http;

class TweetStuff extends Command
{
    public function handle()
    {
        Http::post(
            config('services.zapier.twitter_webhook_url'),
            [
                'text' => 'Testing tweet text',
            ]
        );
    }
}
Note: Later in this tutorial, you'll want to actually tweet this message out to make sure your integration works, so you may want to replace "Testing tweet text" with something you're actually interested in tweeting.
Now, of course, we need to make that config
exist. Add ZAPIER_WEBHOOK_URL=
to the bottom of the .env.example
file, and then edit config/services.php
and add the following to the bottom:
'zapier' => [
'twitter_webhook_url' => env('ZAPIER_WEBHOOK_URL'),
],
OK. Let's test that out. Run your command, and go back and check the Zapier page. Click "Test trigger" and you should see your tweet text right there in the app!
Since it worked, we can now hit Continue. Accept the default (app "Twitter", Action Event "Create Tweet") and hit Continue again. Now you'll be prompted whether you want to auto-follow Zapier, then taken to the Twitter OAuth screen.
Now let's continue again!
You can write text around the tweet, but I knew I wanted to just use the tweet text from my webhook. To add text from your webhook, click in the "Message" box and choose "1. Text: Testing tweet text" from the dropdown. I also chose not to have my links shortened by Zapier.
Here's what it looks like:
Let's hit continue, and now we're ready to actually test the Twitter connection! Hit either button—I hit "Test & Continue"—and you can now go over to your Twitter account and look for that tweet.
Look at that! Beautiful! If you're happy with how it turned out, you can click "Turn on Zap", and you're good to go! Happy tweeting!
Out of the box, Sail comes with MySQL, Redis, and MailHog. But what if you want to add PostgreSQL? Elasticsearch? Memcached? MSSQL? Or what if you have four Sail environments running, each using 300-400MB of RAM for their MySQL instances?
Never fear: Sail works great with Tighten's tool Takeout, a simple CLI tool for managing one-off Docker services.
As always, the Sail docs are your best option. But here's the simple rundown:
sail up -d
sail down
That's about it! You'll also want to pass any commands (for example, php artisan migrate
) through Sail: sail artisan migrate
, or sail composer require tightenco/tlint
. Or, you can run sail shell
to run Bash on the container, sail test
to run your tests, or sail tinker
to run Tinker.
The docs also show you how to connect to your MySQL container using your favorite SQL GUI.
Note: If you're a Docker pro, but still want to use Sail, you can publish your Sail config files by running
sail artisan sail:publish
and editing the files in the/docker
directory of your project. Then runsail build --no-cache
to update your Sail containers.
Takeout is also a tool for managing Docker, but it focuses on creating one-off containers for common services that can be shared by all of your local projects, whether or not those projects are using Docker.
To install Takeout, run composer global require tightenco/takeout
. Don't have Composer installed globally? No worries. Try Liftoff, the quickest way to get Composer and Takeout installed on your machine.
In order to spin up a service—let's say you want to use PostgreSQL—run takeout enable servicename
, or just run takeout enable
to choose from a list. So, for now, takeout enable postgresql
, follow the prompts, and you'll be up and running.
Note: If you're actually following along, it's much easier to work with PostgreSQL if you set a password, so for now I just set mine to
password
.
Now that you have both working, let's get them connected!
Next, update docker-compose.yml to connect to the takeout network. In order to connect Sail and Takeout, we'll just add a "network" to your docker-compose.yml
file. Open that file in your favorite editor, scroll all the way down to the bottom, and under the networks
setting, add this entry:
takeout:
external:
name: takeout
Your networks
key should now look like this:
networks:
sail:
driver: bridge
takeout:
external:
name: takeout
Then find the networks
keyed object under services:laravel.test:networks
(it's line 18 at the time of this writing), and add a new line for - takeout
, making it look like this:
services:
laravel.test:
[...]
networks:
- sail
- takeout
Save that file, and now run sail build --no-cache && sail up
. Your app should now have access to your Takeout-managed instance of PostgreSQL.
Every Takeout service gets an alias that you can use to refer to it in your Laravel configuration (.env
or elsewhere). You can find those aliases by running takeout list
, which should give you output like this (I removed some irrelevant columns to save space on the blog):
+---------------------------------+-------------+-------------------+
| Names | Base Alias | Full Alias |
+---------------------------------+-------------+-------------------+
| TO--postgresql--9.6.20--5432 | postgres | postgresql9.6 |
+---------------------------------+-------------+-------------------+
For any row that has both a Base Alias
and a Full Alias
, I can use either to refer to this service in my .env
.
However, if I have more than one PostgreSQL instance running through Takeout, only one will get the Base Alias of postgres
, which is why there's the Full Alias if I need it.
So now, I can update my .env
and set DB_CONNECTION
to pgsql
, DB_HOST
to postgres
, DB_PORT
to 5432
, DB_USERNAME
to postgres
, and my password to the password I set when I spun it up with Takeout (password
) and that's it!
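Spelled out, the relevant .env entries look like this (DB_DATABASE is whatever your app's database is named — my_app here is just an illustrative placeholder matching the error below — and DB_PASSWORD is whatever you chose when enabling the service):

```shell
DB_CONNECTION=pgsql
DB_HOST=postgres
DB_PORT=5432
DB_DATABASE=my_app
DB_USERNAME=postgres
DB_PASSWORD=password
```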
You can run sail artisan migrate
and see that it's now connecting to your Postgres database. If you've followed along, you'll actually see an error: SQLSTATE[08006] [7] FATAL: database "my_app" does not exist
, but that's just a sign you need to create the database, which is outside the scope of this post. (TLDR: the easiest option is to connect your SQL GUI to your new database server and create it there.)
That's it! Enjoy!
You might expect your in-development package's code to live in the vendor/
directory, but that folder is Git ignored, so that's not it...
There are three primary ways you can work on your package:
A friend told me today they've been building a custom provider for Laravel Socialite, a tool for adding social authentication to your Laravel apps.
So, let's use that as an example. How would I go about building a custom provider, with its own unique namespace, that I might eventually want to release as a package?
First, let's pick the namespace. Socialite just requires my custom code to extend a Socialite class and implement a Socialite interface, but it doesn't care what namespace I put it in. So, let's imagine I'm releasing a collection of custom Socialite providers.
I'll imagine maybe making a package on Packagist as mattstauffer/socialite-providers, so the namespace would probably be Mattstauffer\SocialiteProviders.
My class for today will be MaceBookProvider
, providing the ability for users to log into MaceBook.com
, the premier social network for medieval weapon aficionados.
Let's create the file now. It'll look something like this:
<?php
namespace Mattstauffer\SocialiteProviders;
use Laravel\Socialite\Two\AbstractProvider;
use Laravel\Socialite\Two\ProviderInterface;
class MaceBookProvider extends AbstractProvider implements ProviderInterface
{
// ...
}
But where does this file go?
It's a very common pattern to have a folder in your application with the folder name of src/
which contains your custom PHP code. That's one option—throw it all in there.
If you don't want to put it in src/ because you already have plans for that folder, you can also create a new folder alongside it and name it something like packages/. We'll assume for the rest of this article that you've put it in packages/.
You'll want to treat that packages/
folder as if it's representative of your top-level Mattstauffer
namespace. So, we'll add a subfolder SocialiteProviders
to represent our Composer repo, and then our file will live at packages/SocialiteProviders/MaceBookProvider.php
.
We're almost there! Now, we need to teach Composer that any files in packages/
should be treated as being in the Mattstauffer namespace,
and then use their directory tree and filenames to define the rest of their namespaces.
Thankfully, that's exactly how PSR-4 works! So we'll tell Composer to PSR-4 autoload the packages/
directory and map it to the Mattstauffer
namespace and we're good to go!
Note: You could be more precise by instead loading packages/SocialiteProviders as the Mattstauffer\SocialiteProviders namespace. Your call.
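If it helps to see the mapping concretely, here's a tiny sketch of the path resolution PSR-4 describes: the namespace prefix maps to a base directory, and the rest of the class name maps to the file path. classToPath() is a hypothetical helper for illustration, not part of Composer's actual API:

```php
<?php

// A minimal illustration of PSR-4 resolution. classToPath() is a
// hypothetical helper, not Composer's actual API.
function classToPath(string $class, string $prefix, string $baseDir): ?string
{
    if (strncmp($class, $prefix, strlen($prefix)) !== 0) {
        return null; // This prefix isn't responsible for the class.
    }

    // Drop the prefix, then turn the remaining namespace into a path.
    $relative = substr($class, strlen($prefix));

    return $baseDir . str_replace('\\', '/', $relative) . '.php';
}

echo classToPath(
    'Mattstauffer\\SocialiteProviders\\MaceBookProvider',
    'Mattstauffer\\',
    'packages/'
); // packages/SocialiteProviders/MaceBookProvider.php
```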
Edit your composer.json and scroll down to the autoload key. If you're using a modern framework like Laravel, you'll likely already see a PSR-4 entry there, looking something like this:
"autoload": {
"psr-4": {
"App\\": "app/"
},
So, let's modify that to add ours:
"autoload": {
    "psr-4": {
        "App\\": "app/",
        "Mattstauffer\\": "packages/"
    }
},
Run composer dump-autoload and you're good to go!
You can stop right there! But... if you want to take your package loading to the next level, read on...
If you're definitely going to eventually distribute this package, the solution described here might not be enough. For example, your package might have Composer dependencies of its own. You want to keep that list separate, right, instead of just adding them to the parent application's composer.json?
If this is the case, it's time for you to move up one level. You'll need to create a folder adjacent to your application's folder, and outside of the Git repository. (Watch a free video from my friend Marcel here)
So, if the package were named BestPackage and the site were named BestProject, they'd both be under the same parent directory, Sites, like this:
\Users
\mattstauffer
\Sites
\BestPackage
\BestProject
To autoload your new package, you'll need to modify your composer.json to treat the "path" to that folder (../BestPackage) as a valid Composer source.
But first, you'll need to ensure that your new package has a valid composer.json in it. You can create that by moving to your package's directory and running composer init.
You can choose which of the prompts you want to follow to create this file, but the most important thing is to give this package a valid "name" that is in a namespace that you own on Packagist.
Here's what my "BestPackage" composer.json might look like:
{
"name": "mattstauffer/best-package",
"description": "The best package!",
"type": "library",
"require-dev": {
"tightenco/tlint": "^4.0"
},
"license": "MIT",
"authors": [
{
"name": "Matt Stauffer",
"email": "matt@tighten.co"
}
],
"require": {}
}
Next, back in the original project's composer.json, create a repositories key, which will contain an array of objects, and add your first object:
{
// require, require-dev, etc.
"repositories": [
{
"type": "path",
"url": "../BestPackage"
}
]
}
Finally, you can require that package!
composer require mattstauffer/best-package
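After that command runs, the relevant parts of your application's composer.json might look something like this. (The exact version constraint may differ in your project; Composer typically symlinks path repositories and will accept a loose constraint like *.)

```json
{
    "repositories": [
        {
            "type": "path",
            "url": "../BestPackage"
        }
    ],
    "require": {
        "mattstauffer/best-package": "*"
    }
}
```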
The downside of this method is that your package won't deploy along with your repo, so you'll have to set the package up parallel to your app on other servers as well.
But this is a great option for preparing for real package distribution.
Do you want to dig deeper into PHP Package Development? My friend Marcel has built an entire course around it entitled PHP Package Development.
The Stream Deck is a class of hardware device created by Elgato. There are three sizes, and each is essentially a mini keyboard where each key can be customized to show any icon you want and take any of a series of actions when pressed.
The actions you can take are primarily targeted at streamers, so many integrations work with streaming tools like OBS and control mics and screens. But there are also some other broadly useful integrations, like media controls, social media push, and more.
The simplest and most common uses for Stream Decks are toggling mics, switching scenes in OBS, playing sounds, and triggering actions in other tools like Twitter.
If you want to get a bit fancier with Stream Deck, though, you can also organize your actions into "profiles" (sort of the different pages of apps in the iOS home screen), or create "multi-actions", which trigger more than one action when you press a single key.
There are three sizes of Stream Deck, but the standard 15-key model is the only one you can find easily on Amazon. Here are links to all three, if you're interested:
I personally have the standard and I think it's a fantastic size. There are enough options that I don't have to swap between folders during any given scenario, but it's not massive like the XL.
In order to customize your Stream Deck, you'll install an app on your computer that lets you create profiles and assign actions to keys.
Here's what that tool's user interface looks like:
You can see the right column is a list of possible actions, the top of the left column is either a grid of keys (if I'm editing a profile) or a column of actions (if I'm editing a multi-action button), and the bottom of the left column allows me to customize the current action or multi-action I have selected.
There are a few important concepts when it comes to Stream Decks:
There aren't many global settings of any significance on the Stream Deck, so I'm more interested in sharing my profiles and multi-actions with you.
As you can see, I'm using a combination of profiles (which I use sort of like scenes—one for day to day, one to prep for a stream, one during solo streams, one during guest streams) and multi-actions (to combine multiple steps that always happen at the same time together).
I'm using actions to open and close applications, trigger keyboard shortcuts, send tweets, toggle my lights and change their brightness and color, switch profiles, and much more.
Day to day actions: The actions I might use when I'm not streaming: controlling lights and music.
Pre-streaming actions: The actions I take as I prepare for my streaming session. I work through these multi-actions one by one as the stream gets closer until I finally hit the guest or solo start buttons.
30 minutes before stream multi-action: This is the multi-action that I press as I'm readying my computer for streaming. I still have to take a few steps manually, but this handles some of the most repetitive aspects of my prep process.
Start stream multi-action: This button actually starts the stream. I hit this about ten minutes before the stream is scheduled to start, so the folks who get notifications when a stream starts can get the notice, and so I can take a few minutes to make sure my stream is set up correctly. When there are only five minutes remaining, I hit the "5 min countdown" button to start the timer.
Let's go solo multi-action: This, and its guest alternative, I will press the moment my start time arrives. Pressing this means the stream has really started, and I am now on camera.
Solo streaming folder: This, and its guest alternative, are the profiles I stay on during the stream. I can switch scenes, censor the primary screen, and toggle my mic and lights.
I've used only the default icon sets in Stream Deck, but I have a goal to eventually get more customized icons. You can download existing icon sets into Stream Deck or make your own.
You can learn more by taking a look at Elgato's intro video on creating your own custom key icons.
That's it! The Stream Deck is a really fun tool, and it makes it a lot easier to standardize my streaming process. The form factor is perfect, being able to customize the display of the keys is a delight, and it's just an all-around great product.
Have any questions? I'm @stauffermatt on Twitter!
The most common request I received after sharing that post was: "How do I get started live streaming?"
I've been streaming on Twitch (twitch.tv/mattstauffer) for a few years and recently I've started streaming to YouTube (youtube.com/mattstauffer). I'm still definitely no professional streamer, but I'd love to share what I've learned with you so far.
Here's what we're going to cover:
Just so you know, I'm going to be describing a few generic options, and then going deep into my particular setup. It's not the only way—it's just the way I've chosen, for now. Also, I work on a Mac, so I only have experience with Mac software.
If you want to get started streaming as fast as possible, here are your quick steps.
OK, so you want to do it a bit better than that?
Good? Great. Let's dive into the details.
When it comes to streaming, there are definitely a few concepts that might be a bit new, so let's cover the basics.
First, streaming means sending some input source (or sources) from your machine to the Internet for people to watch live. Many streaming services allow for playback after the stream is over, but the core idea is just that you're sending something to the server and it can be viewed at the same time (minus transmission and encoding/decoding delays).
That "something" you send will usually be a video capture of your desktop, some audio from a microphone, and often video from a webcam and the audio captured from your computer. But it can also include output from specific web sites, audio and video capture from other apps (like Skype), text-based rendering, graphics, animations, and more.
Most streaming services spin up a chat room for each stream, so the folks watching can chat about the stream (or anything else). Some streamers choose to engage with the chat live on the stream; others choose not to.
You'll stream using specialized software that's created solely for the purpose of recording and streaming video.
Here's a frame grabbed from one of my recent live streams:
You can see that, during the majority of the stream, viewers see me, my screen, and the chat. They also hear my microphone and any audio that's generated by my computer.
The software we use for streaming has a few key features: capturing and organizing input from multiple sources, combining that input into "scenes", and sending the output to a remote server (and, optionally, saving it locally to your machine).
There are a few primary apps folks use to stream.
Note: Right before I published this article, the beta of Streamlabs OBS for Mac was released. Everything I say about OBS in this article will still apply; if I try Streamlabs OBS and like it, I'll update this post to mention it. From what I've heard, the beta is currently a bit too flaky to recommend starting with.
Each of these tools captures input, allows you to organize and combine the input, and then encodes the resulting video and audio. When you set them up, you'll define the service (Twitch, YouTube, etc.) and then provide your API key for that service, and that's it!
I originally used OBS mainly because my friend Chris Pitt was the first person who got me streaming, and he used OBS. Since then I've heard good things about Streamlabs OBS, but even if it is pretty nice, it's still OBS under the hood. So, let's learn some OBS.
Here's what my basic OBS setup looks like:
You'll see a preview of the stream in the top; panels below for your scenes, sources, audio sources, and some controls.
Scenes are the tool you'll use in OBS to organize and lay out the content you're sending in your stream. As you can see, I have scenes laid out for each of the main scenarios I might be in any time I'm streaming.
Sources are audio or visual inputs. The audio inputs are usually microphones, sound capture from an app or web site, or pre-recorded music. Visual inputs will usually be screen captures of an entire screen or a single window, an embedded web site, or a pre-recorded video or pre-made graphic.
In the scenes panel of that image above, you can see how I've organized my scenes.
The simplest scenes are a single video source and a single audio source. One webcam, one audio source. In the middle of the picture you can see a column labeled "Sources"; this is where you determine which sources will be a part of the scene you have currently selected.
But when you start building more complicated scenes—for example, if you're following the common pattern of layering graphics on top of your screen capture, and putting your webcam in a hole in the graphics—you may find yourself wanting to compose the scenes using different groups of sources.
Thankfully, it is possible to compose your scenes! You can embed a scene in another scene. As you can see, I have added prefixes to all of my scenes. Those which start with "scene" are for selecting during a live stream. Those which start with "group" are for using to compose the above scenes (and "GUESTGROUP" are misnamed, because they're really guest scenes, but I only just noticed that).
I'm going to show you some of my scenes quickly, but if you want to skip over them, tap here to move to the next section.
Here are some of my scenes that are true scenes:
And here are my "group" scenes. Remember, I'm using these as pieces to compose the scenes above.
Solo overlay: The solo overlay is the entire set of graphics that goes over and around my screen share. That's the chat, my webcam, my logo graphics, and StreamLabs.
StreamLabs: StreamLabs lets me create a widget in their app that will pop up any time someone subscribes to me (or takes other actions). They then render a transparent web page (which shows the widget when appropriate), and I overlay that web page over part of my screen, so the animations will show there whenever I get a new follower.
Chat: Restream consolidates the chats from all of my streaming services into one chat, and provides a web URL that renders it live. I shove that poor chat stream into a tiny little compressed box.
I won't claim I have the perfect settings, but I know that getting settings right can be tough at the start. Here are some of the key settings I have—to be honest, I don't remember which of these are the defaults and which I've customized.
If you have trouble with these settings, OBS has an Auto-Configuration Wizard which might also help you out.
You're likely going to want to send a stream to your audience that's smaller than the resolution of your screen (especially if it's Retina). One common way to do this is to change your screen's resolution before you stream, and that's what I did for a long time.
But lately I've instead set OBS to only capture a smaller portion of my screen. This way my audience doesn't see my menu bar or my dock, I can move important windows like the chat onto my primary monitor (but outside of the screen capture window), and I don't have to deal with resizing my screen all the time.
I learned this trick from Noopkat, a popular streamer, in her article Lessons From One Year of Streaming on Twitch.
Here's what my actual screen looks like, with the red border (added after the screenshot) showing where OBS is actually capturing.
You can see a longer writeup on my webcam, lights, and mic—and entry-level options for each—in my recent post, Setting up your webcam, lights, and audio for remote work, podcasting, videos, and streaming, but here's a short list of my actual relevant equipment:
I mentioned this in the article linked above, but not all of this is necessary! You can absolutely get by with this setup:
You don't need a fancy keyboard, fancy lights, a DSLR, fancy speakers, a Stream Deck, or anything else to stream. I mean, hell, if you're just getting started, do your first stream with whatever you have laying around. Just stream!
I love all of this hardware, but there's one that's especially targeted at streamers: the Elgato Stream Deck.
I actually wrote an article all about my Stream Deck setup and released it at the same time as this post, so go check it out if you're interested!
There are a host of software and online services you can integrate into your stream experience. Here are the categories I'm aware of:
OBS itself makes it easy to capture various apps and web sites and parts of your screen, but you also might want to capture and process audio before it even hits OBS as a source. My favorite apps for this process are made by Rogue Amoeba, and one of their tools is key for streamers: Loopback.
Loopback is an audio routing tool that allows you to create "virtual devices" from other sources that look, to your computer and your apps (including OBS) like actual audio inputs.
You can see the simplest usage of Loopback in the image above: combining multiple audio sources (including one, Chrome, which normally would be hard to capture) and sending them to a single virtual device.
Another way I've used Loopback is to create a new virtual device called "Split left channel" that maps the left channel of a source input to both channels of the virtual device.
StreamLabs OBS offers limited Multistream: you can stream to Twitch + Facebook or YouTube + Facebook, but not Twitch + YouTube. And, sadly, no other broadcasting apps I know of offer multistream.
That means if you want to stream to more than one service at once, you'll need to use an aggregator. I use and recommend Restream, which can stream to dozens of services at once, and also offers an aggregated chat app so you can interact with all of your communities at once.
Restream's free tier is great, and has been more than enough for me. I found a great getting started with Restream tutorial.
There are a few services that collect together a suite of tools targeting streamers. The main two I know of are StreamLabs (which I use) and StreamElements. You can dig into their sites to learn about the insane number of tools they offer (rolling credits at the end, tip jars, sponsor banners, etc.), but here are the ones I use:
Just like the chat bots of yore in IRC and Slackbot, you can program your own bot that will hang out in your chats and respond to timers, commands, and outside input to take actions in the chat and elsewhere.
My bot (powered by Streamlabs' Cloudbot) shares a message any time I get a new follower, offers commands like !editor to learn about my theme and text editor, and provides some mod tools, but it could do way more if I had the time and energy to configure it.
Overlays are basically widgets that you put on top of your stream in your streaming software. There are two main types of overlays: Persistent and popup.
Persistent overlays will occupy a small section of your screen and often show data like number of followers, most recent chat message, and other data you might want to show that's best provided by a web-based service.
If you want to try this out, check out Streamlabs' tool Stream Labels.
Popup overlays will only show up when an action happens—for example, a little GIF of Shaq wiggling shows up on my screen every time I get a new follower.
The way these overlays work is that you get a URL from your overlay provider (something like streamlabs.com/widgets/3409h104123). You'll create a source in your streaming software that renders the content of that URL, and size and position it wherever you want the overlay to show up.
Popup overlays are the most interesting, because you're really just placing a transparent box on your screen. The web page you're rendering (from the overlay provider) stays transparent until a popup needs to be rendered, so you never even know it's there until an alert pops up.
If you want to try this out, check out Streamlabs' Alert Boxes and their other widgets.
If you're streaming to Twitch, your videos will be available for replay for a time; the exact time differs depending on whether or not you're a Twitch partner. You can also export your videos to YouTube after streaming to Twitch (although you'll have to wait 24 hours after streaming if you're a Twitch partner).
If you're streaming to YouTube, it'll automatically save every video you stream to your account. Here's my process every time I finish streaming:
I also create a custom thumbnail for every video in Photoshop.
I may add to this section over time, but here are a few tips I can think of off the top of my head.
I cannot recommend more highly the practice of picking a time that you're going to stream and sticking to it. That doesn't mean you can't occasionally change it, or even stream at other times, but one of the best ways to get a consistent audience is to let them know when to look forward to your stream.
One thing that helps is to let everyone know in a lot of places and a lot of times that this is the case. I've added my time (11am Eastern on Fridays) to my web site, my Twitter bio, and probably other places, and I'll tweet a reminder the night before and an hour before most weeks too.
I used to show up to each stream entirely unsure of what I was going to work on. I've since discovered that my better streams happen when I've thought ahead about what I'm going to work on. I can structure what I think an hour of content will look like, I can do any boring prep work needed, and I can tweet the topic out when I'm reminding folks the day before.
It's also important to give 10-15 minutes buffer before and after each stream on your calendar. Don't pack it in tight; you'll regret it.
Decide whether or not you're going to be a chatter.
I could spend so much more time coding if I didn't pay attention to the chat on my streams.
But I'd also have way less of an enjoyable connection with the folks who take time out of their day to hang out on a stream with me. I love especially when I get to know the regulars over time. And you might be surprised at how much help I get from the folks on the chat. They're full of wisdom and incredible resources, and really contribute to every project I work on.
After that, it's up to you! Nerd out! And if you have any questions or suggestions, hit me up: I'm @stauffermatt on Twitter!
Today, worrying about those things feels pretty luxurious. In light of the number of companies moving (temporarily?) to work-from-home due to COVID-19, I sent out this tweet this weekend:
So, @dsheetz & I have been running a fully remote development shop (@tightenco) since 2011.
— Matt Stauffer (@stauffermatt) March 15, 2020
If you’re suddenly remote and you have any questions for us, hit me up!
I’ll probably be tweeting out some blog posts, YouTube videos, & podcast episodes we’ve made about working remote.
I've already received quite a few messages that go basically like this:
My company is suddenly remote and we've never done this before. Help!
Like I wrote in the tweet, we've been remote for almost a decade, and tried almost every tool and trick you can imagine. There are so many aspects of this to cover. Here are a few places I've talked about remote work in the past (check the time stamps—some of them are a few years old)
But what I want to talk about today is the other side of my previous blog post. That post was about how I've spent years working on getting my not-at-home remote office just the way I want it.
Today, I want to talk about remote work—especially right now, as so many people are unexpectedly being told/allowed to work from home—and how so much of it happens in less-than-ideal environments, and what we can do to make the best of it. I'll assume you're working from home, but many of these tips apply in other less-than-ideal remote work environments as well.
If I'm working remotely, I want these things:
I've got most of these things in my normal day-to-day remote work. I pay for an office in a coworking space that's a few minutes from my son's school, and during the day my kids are at school or with their mom.
I've got a great tech setup, a stocked refrigerator and great restaurants nearby, my room is clean and isolated, and there are other folks around when I want to see them.
However, at least for the next few weeks, I, and millions of others, will be working from a place that likely hasn't been set up to perfection. Me? I'm working from our spare bedroom—also known as a room with no desk—bad light, and a lot of junk. Plus, it's just a dozen feet away from where my kids are playing all day. I need to get my stuff together, and you may too, so let's talk about it.
Note: I'm going to do my best to give this advice assuming you don't have kids, and then talk about kids at the bottom.
The most important thing that disappears when you start working from home is structure. There are a lot of structures we get from going into an office: time structures, physical structures, even management structures. These structures' sudden disappearance doesn't turn us into freeloaders who watch Netflix all day while getting paid for it, but it does add stress and uncertainty that can weigh on us.
Schedules are our best tool to create structure. I use my calendar to plan out my entire day, both during the work hours (write for an hour, pair program for an hour, meeting for an hour, etc.) and also outside of the work hours.
Folks at Tighten, the consultancy I run, who always work from home told me to be sure to mention the daily routines that start and end our days. Wake up, run, take a shower, eat breakfast, get dressed, start work. Close laptop, turn off the lights, take a walk, start dinner prep. Whatever works for you, make a plan.
The consistency, regularity, and predictability will bring much of that structure you miss. And managing those transitions can often be the most important thing for controlling your stress in a less-than-ideal environment—this is what helps set those boundaries between "home" and "work".
Another great tool for creating boundaries between your work and personal life is to try to make a dedicated space for your work.
Obviously it'd be great if you had a home office, but if not, you may be able to carve out a space that's entirely dedicated to work. This might be one end of the kitchen table, a certain corner of your bedroom, a desk in the living room, the garage, or anything else.
Make that, for now, your work space. Your computer lives there, your work gets done there, and most importantly, when you put yourself in that place you're "at work" and when you leave that place you're not "at work" anymore.
You might be surprised, but getting dressed specifically for work has really powerful effects.
First, you'll feel more mentally put-together when you're not in your pajamas.
Second, this is another boundary you're building between home and work, sort of like Mister Rogers changing his shoes when he gets home.
And third, you'll be much less averse to video calls—I'll talk about their importance later—if you're looking professional.
But here's another pro tip: keep a nice shirt and a hat nearby. That way, if you have to jump on a video call and you are wearing that one t-shirt you're a bit embarrassed by, or you haven't had time to do your hair today, you can just throw those things on and be ready for a call.
If you've never worked with them before, a "pomodoro" is basically a period of work (often 25 or 50 minutes) followed by a period of rest (often 5 or 10 minutes). This is a way to build little micro-structures into your day, which can be especially helpful if you're used to a day that's not just sitting in front of the same computer at the same desk for eight hours.
Note: The actual Pomodoro technique is a bit more complicated than this recommendation, but when folks talk about pomodoro, they often just mean "period of work followed by a period of rest with a timer helping you remember".
My last post was about my perfect setup I have at my office, which is pretty useless right now when I'm sitting at home. I have no desk, a Macbook Pro, and some headphones. Not the same. What can we do to improve our working-at-home equipment situation?
This is an obvious one, but I'll just throw it in here. Noise canceling headphones are so key when there are neighbors, kids, roommates, spouses, pets, or whatever else constantly vying for your attention.
"Less-than-ideal" work situations almost always have some element of distraction, and noise-canceling headphones and some chill background music can make a huge difference.
I personally saved up for Bose QC-35IIs a few years back, but those are $350 headphones. However, a few folks—including the Wirecutter—recommend the $60 Anker Soundcore Life Q20s.
Using a separate keyboard provides a lot of benefits when you're not working from your perfectly-crafted office. Separate keyboards are likely to be more ergonomic than your laptop keyboard, they give you more flexibility to put your laptop in the right spot for good posture, and they help with video angles (more on that later.)
If you're on a Mac, the Microsoft Sculpt Keyboard is incredible, and if you're on Windows, the Microsoft Surface Keyboard is even better.
If you can sit in a real chair, do it. One common component of less-than-ideal work environments is the lack of a desk. I know it might seem fun to work from the couch, but your back will make you feel it after a few days. If you're working from a laptop, you can even possibly put your computer on a dresser and work standing up for segments of the day.
This may seem crazy, because I think we've all seen that news anchor working from home whose adorable children bust in on him, but when you're working remotely, video calls—not just for group meetings but also for one-on-one conversations—give an opportunity for human connection and communication that is hard to get with audio or text.
This is a bit controversial, but I believe that every meeting I ever have with anyone should be video. Even in your less-than-ideal work situation, which often translates to less-than-ideal video environment, I encourage you to consider it.
Quick webcam tip! When your webcam is mounted on the top of a laptop that's on your lap, you're going to get both a bad angle—your chin smushed down and a bit of a view of the underside of your nose—and bad lighting. Overhead lighting is bad for us anyway, but your face will look especially dark if you're leaning over a bit to look into your webcam.
If possible, put your laptop on a desk or a stand and use a separate keyboard. This will both be better for your back and it'll get you a better angle on your webcam.
Overhead light is not flattering. Most rooms have overhead light. However, it's good when working from home to have plenty of light just for your emotional health, and most lights you could add to your room will not be at ceiling height.
Kill two birds with one stone: get some freestanding or desk lamps and put them near you. You'll have a brighter work space and get better light for video.
When you're on a video call, you're exposing your less-than-ideal work environment to the world, right? That messy bed that's been annoying you all day and making it hard to focus is now also in frame for all of your coworkers to see.
Some big things to watch out for:
I've seen folks put up privacy screens behind them to block out the view, and I think that's a pretty advanced tip. For now, I just work with a wall behind me. It's not pretty, but there are no naked people or dirty underwear on my wall, so I'll call it a win.
I won't say that working from home is necessarily worse for your health than working at an office. There are some huge benefits, including access to family and comfort and removing a commute and removing interactions with potentially plain-old-cold-or-flu-sick coworkers.
However, there are a lot of distractions and possibly negative influences at home, and a lot of our healthier habits like healthy food at work and going to the gym might be disrupted if we're working from home, especially during this particular moment.
I wrote this above with regard to equipment, but I just want to say it again. Good posture is key. Your back is going to kill you if you work hunched over all day for weeks.
Fresh air and sunlight and moderate physical exercise are three of the most important factors for our physical and mental and emotional health, and both fresh air and sunlight help kill germs and viruses.
Our minds and spirits respond better to natural light. Try to work in a place where there's as much natural light as possible. If you suffer from Seasonal Affective Disorder, this may be the time to consider getting that artificial sunlight machine if you don't have a window near you.
One way working from home can seem very ideal is access to the kitchen. Finally, you think, I can cook my own home-made meals! This is true, and often it can lead to much healthier eating.
However, easy access to the kitchen also can increase snacking, which, at a time you might suddenly be going to the gym less, is not ideal.
Consider scheduling your trips to the kitchen. Once for a morning snack, once for lunch, and once for an afternoon snack. Make sure you have snacks available that are healthy, and if not, train yourself to get water instead.
You may be tempted to replace your commute with more entertainment time, or more work time. Consider instead setting aside time for you and your body. Meditate, or run, or walk, or whatever it is that gives you peace. Especially if you can do that at the start of the day, it could be a huge help for your less-than-usual days going more smoothly.
If you're going to therapy, and you're reading this article because of COVID-19, you may be unable to visit your therapist physically. Contact your therapist now and see if they offer telehealth (e.g. Skype calls).
Making drastic changes to your work context can have a big impact on your mental health, especially if it's during a national crisis. Stress, anxiety, depression, relational conflicts—they're all going to come to a head.
Many people have joked about how many new babies will come out of this time, but I also think there will be new fights, new divorces, new anxiety attacks, and much more. Take care of yourself, however you do that.
Our friendships and family relationships are both key to our mental health when things aren't going well, and also a possible source of real difficulty when we're all of a sudden crammed together in the same space.
Working from home, especially when you didn't choose it, can often feel very isolating. It can be helpful to intentionally keep the rhythms and connections of the relationships you have in normal life.
Do you always catch up with one friend at lunch at the office? See if you can chat over Skype or Hangouts or whatever at some lunch times. Always get together with your best friends at the bar every Tuesday night? Do the same thing, but over Hangouts. Each of you has their own separate drink, but you're still together and still connecting.
I just happened to stumble across an article Wirecutter wrote about How to be Social While Social Distancing, and it's also got some great tips there.
While your work and friend relationships may decrease, some other relationships will have increased access when you work from home, and this isn't always good for the relationship. Especially if it's during a time of stress, and especially if you have a smaller house, you're going to start feeling the stress of your interactions with the folks you love.
One of the best tricks we've come up with as from-home workers is to be very clear about our boundaries (but still gracious when they're broken). Many folks will make an "at work, do not disturb" sign they can hang on a door, or create an "if the door is closed, I'm working" policy. Others may choose hours: "Between 9-12 and 1-5 I'm in work mode".
The best way to avoid conflict is to express your unspoken expectations. Do you expect not to be interrupted when you're working? Say it, kindly. Repeat it, kindly, if it happens again.
Speaking of grace... grace is a key component of working from less-than-ideal situations.
If this situation is less than ideal, it's likely either new, or cramped, or something else that makes it less than ideal not just for you but also for the people around you. When your housemate unthinkingly plays loud music or your spouse asks for your help with something or your kid asks you to look at their painting, they're not trying to mess up your work flow. Give them grace.
Of course, giving someone grace is not the same as a free pass. You can still be direct and clear while being gracious and loving.
Most importantly, recognize that you're not going to be able to produce quite the same amount or quality of work when you're first entering a less-than-ideal remote working scenario. You'll get there. But it takes time.
Don't get mad at yourself for being distracted. You're going to get distracted. Your kids or housemates or spouse or cat merit your love and attention and sometimes those things don't happen in the timing you wanted. You're a human being and you'll respond as a human.
Give your systems and structures space for grace. Don't build something so rigid that, if you decide to take the cat for a walk at 2pm, the rest of your day falls apart. That's not a failure! That's a healthy and normal part of human life, and one which any structure needs to allow.
Take an attitude toward yourself that you're doing the best you can with what you have and you'll get a little bit better every day. That's all anyone can ask of you.
So, I'm going to rely on other folks a bit here. I do have two children. However, my wife's career field is mainly on hold because of COVID-19, so she's primarily taking care of the kids while I work. So, unlike families where both spouses work, or single parents, I have it much easier. My main concern with the kids isn't being fully responsible for their safety and feeding and education, but just enjoying them without being too distracted by them.
However, if you have kids—especially young kids—and you're responsible for them during this time and have to do a job, you're in the hardest of non-ideal work settings. Here are some ideas I've had, along with a few from friends.
Kids thrive on structure, despite how much they say they don't like it. Make a daily schedule for their education, food, play, screen time, and whatever else. Keep it flexible, keep it gracious, but make it clear so they can know what to look forward to.
This will also help you, as it'll become easier to know when to schedule meetings, and easier to let them have some screen time or play time without feeling like a terrible parent.
Pro tip: If your kids are old enough, involve them in creating the schedule!
Sure, you read all those articles about how kids are going to melt their brains on screens. Screw that shit.
Your kids are not the only people in your house who matter. Your work, your time, and your sanity are also important. Use the tools you have available to you—which include educational screens and even non-educational screens—to make the best of the situation you're in. End of story.
All of us have some busy work, and I've found that I can work through the busy work while my kids play. That means if I want them to play outside (my kids are young and we don't have a yard, so I have to supervise them when they play outside), I could take them outside, sit on my laptop, and work through emails while watching them play soccer out of the corner of my eye.
Life doesn't always have to be binaries. Working or home. I do think boundaries are healthy, but sometimes, especially with kids, you gotta do what you gotta do.
If your less-than-ideal work environment is your home, there are likely other tasks that are occupying your day as well: laundry, cooking, cleaning. These tasks often require us to put the kids in front of screens for yet another half hour, but there's an incredible alternative: involve your kids in the housework.
Get your kids to put away the laundry. Have them stir the sauce. Teach them how to clean windows and dust. They're learning valuable skills, they're shaking off some entitlement to being entertained, and, if they're old enough, they might even help you out!
That's all I've got for now. I'll be happy to update this post as more suggestions come in.
I'm pretty active on Twitter as @stauffermatt, so please feel free to ask me questions there. I also have a YouTube channel where one thing folks do is ask me questions that I answer in short video format, so if you shoot me a question on Twitter or in a YouTube comment, I might get a chance to address it there.
Thanks so much for reading. Remember: grace to you. Grace to your family. We're going to get through this.
I've spent quite a bit of time obsessing over lights and cameras, and I wanted to help you—new streamer, podcaster, new remote worker, or someone trying to level up their setup—see a few different types of options for your remote work or streaming setup.
Note: I'm on a Mac, so most things will be biased in that direction. Other note: these are all affiliate links. Please feel free to bypass those if they make you uncomfortable!
First, I'll cover each section, starting from the cheapest options for each:
Then, I'll tell you my setup, and a few suggested full setups at various price points.
Only you really know what level of clarity you want from each piece of your setup. Are you happy with what you have? Please, dear Lord, don't spend any money. This is intended to be a resource if you want more and don't know how to do it, not a stress or a judgment to anyone happy with their current setup.
And while it's a lot of fun to have a really high-quality webcam for my remote work, would I have bought it if I didn't have a more intense need for high quality video for my YouTube stuff? Hell no. Get what you need, in your budget. This is just a resource.
Podcasters need much nicer mics. Streamers need decent mics and cameras, but lighting probably matters the most. YouTubers need the best cameras and light, but audio still matters a lot. Remote workers have the least strict requirements. Do what works for you.
Let's start with the simplest option. Your computer likely has a built-in webcam. It's also probably awful.
As you can see, even on a Mac, the picture is low quality, and, especially in low-light situations like my room when I don't turn on my streaming lights, it's very flat and hard to see.
I've never used it, but I've heard the Logitech C270 recommended as a minor upgrade over your built-in camera. Whether it will be enough for you depends both on your needs and on whether you'll have a sufficient light source; cheaper cameras are very dependent on having enough light.
Most folks at Tighten have chosen to upgrade to a Logitech webcam. There are a few options, but most recommendations will be something in and around the 900 series. Mine is the C930e, but Wirecutter now recommends the C920s, which is cheaper and adds a privacy shield. The C930e also has a wider field of view—great if you have a big room to capture but unnecessary if you're reading this article.
As you can see, these 1080p Logitech cameras have higher resolution, better light sensing, and (with the C930e, at least) a broader view into my room (if you want that—Logitech's drivers also allow you to zoom).
While I was writing this post I was linked to a great post on webcam lighting and best practices in which the author, Olivier Lacan, recommends a Razer Kiyo webcam, which has a built-in ring light.
Here's Olivier's side-by-side comparison of the Logitech C920 (left) vs the Razer Kiyo (right):
He also gives some great tips there about zooming, webcam settings, and the natural lighting in your room.
If you're ready to move it up to the next level, especially if you plan to stream and especially if you plan to record videos for YouTube, it's time to look at connecting an actual camera to your computer.
Elgato makes a device called the Camlink, which allows you to use any device that outputs HDMI as a webcam, meaning you can now grab any video-and-HDMI-enabled handheld camera and use it as a webcam.
Elgato has a list of cameras you can use for this function: they all output HDMI and can be rigged to plug into the wall instead of drawing their power from a battery.
Sony cameras are the most popular; you'll almost definitely find someone out there recommending the A6000. You can find a Sony A6000 for around $550 on Amazon, but if you're willing to go used you can get it for a few hundred dollars cheaper—sometimes as low as $300.
Because I also record videos for YouTube, I wanted one with 4K video, which means I had to spend a bit more. I bought the A6300, which is almost exactly the same as the A6000 except that it supports 4K video. An A6300 new on Amazon costs $1000, so I don't think that's going to be reasonable for most folks, but I was able to get mine used on eBay for $550. Again, if you're not planning on shooting full-frame 4K videos, go for something more like the A6000, and try to get it on eBay.
If you're going for a Sony camera, you'll also need to get a power adapter that allows you to plug a power cord into the battery compartment so it runs off A/C power instead of a battery.
Note: If you already own a DSLR camera, check out this video to see if you can use it as your webcam for free.
I started with the camera, because it's not only important broadly, but most of what I thought were lighting issues were actually solved when I upgraded my camera.
That said, lights can still make a huge difference—especially if your camera isn't the highest quality.
The best option to start with is to get the best possible light using normal lamps. Buy floor stand lights and point them at the walls or toward you, so that as little of your light as possible comes from overheads, especially if the overheads are fluorescent.
Unfortunately, it'll probably take a lot of lamps to light you this way, so you'll probably also need to get at least one desk lamp.
This is my "cheap" setup: overhead LEDs that I can point at the walls to get some bounce light.
I've never used this, but Scott Hanselman recommends a $20 ring light that works great if you have a Logitech webcam.
While it says it's for Logitech cameras, I'm pretty sure that when he upgraded to a fancier camera, he kept using that same light.
If you'd like to make a more complex setup yourself, you can get a few cheap clamp lights (with any kind of bulb in it—pick the right color temperature for you!), then build your own diffuser. If you want to step this one up just a bit, you could put Hue or LIFX bulbs into this so you can control the color temperature and brightness.
If you use this setup, you'll notice the light is much too harsh to point directly at your face. First, consider a diffuser of some sort (parchment paper and binder clips is your cheap option, or you can go for a diffuser sock or something similar). But additionally, consider not aiming the lights directly at your face, but instead bouncing them off a wall or some other nearby flat plane.
There's a growing market of pro-sumer LED panels. Be careful, because the cheapest LED panels you'll find on Amazon are garbage, and they'll fall apart fast.
Neewer is a brand that provides probably the lowest-quality option I'd still recommend considering; it's definitely consumer-grade, but in my experience their stuff has been good enough to use in a non-professional setting without worrying about it constantly falling apart.
I haven't used this particular set, but this LED kit comes with stands and can be dimmed and adjusted in terms of color balance.
Twitter user @Marktechson reached out after I posted this and shared his setup, which is around $80/light:
After years of trying every DIY option I could come up with, I ended up splurging on one Elgato Key Light, and then, six months later, a second. If you can afford them, these are an absolute dream: a flat-panel diffused LED, mounted on a powerful and simple stand, with both brightness and color controllable via computer and attached hardware devices like the Elgato Stream Deck. This is definitely the best option if price isn't an issue.
Note: After I wrote this post, I discovered Elgato had raised the price of these lights from $150 to $200, and introduced a new light, the Key Light Air, for $130. I've never used them, but I'd recommend them over the Key Light, especially for anyone considering two lights. They have half the illumination power, but I never have my lights up full blast anyway.
Here's me with those lights, shaded blue:
And the same, now shaded orange:
The same, with a well-balanced color profile:
If you really want to nerd out, you can play around with your background. I added some LIFX Z Lights behind my couch and a LIFX color bulb, both Black Friday steals, in my lamp:
OK. We've got you looking good; what about audio?
The simplest answer is that you should do anything in your power to get a standalone mic. I don't, unfortunately, have very many examples sitting around, because I eventually saved up for my dream mic and got rid of the rest. But here are a few options.
We'll start with our onboard mic, then headsets, and finally standalone USB mics and then the top of the pack, standalone XLR mics with USB audio interfaces.
Your onboard or webcam mics will sound like garbage. Echoey, somehow so bad that they defeat software noise cancellation (to the point where the other person will hear themselves back)... this is not it.
AirPods compress your audio quite a bit and run out of batteries fast. They're convenient but not a great solution for anything other than occasional calls.
Don't buy AirPods for this purpose. But if you've already got them, there's a good chance they've at least got better audio than your onboard mic. Not by much. But a bit.
If you have a wired boom-mic headset, or even a rechargeable wireless sort, you're likely to get all-day battery with better-than-AirPods and better-than-your-computer-or-webcam-onboard-mic sound quality. This is a great option if you're only remote, not streaming, and you don't care too much about audio quality. And plenty of streamers are even happy with this option, so don't knock it.
There are a ton of great options here, but here are a few I've had recommended lately:
If you want a step up in sound quality, and/or don't feel like wearing a headset, you'll want a standalone mic. Let's start with the cheapest and easiest option: USB mics.
My first piece of advice:
Don't. Buy. A. Blue. Snowball. Or. A. Blue. Yeti.
Why not? The Snowball isn't worth the cost, and the Yeti is a great mic to pick up every damn noise in an entire room, but a terrible mic to isolate a single person speaking. If you're a remote worker, I guess that's fine, but if you don't care that much about sound quality, why not try the AmazonBasics option below, cheaper and with better noise isolation?
Note: I'm being a bit extreme because this is such a popular mic and it burns so many people by how much background noise it picks up. If you have one, enjoy it. But if you don't... if you're reading this, you're almost definitely not its actual best market fit. That's not to say it's not a good mic (although, honestly, it's not even worth its cost relative to other Large Diaphragm Condensers, in my opinion), but just that its noisiness is the number one pain point I've seen for new podcasters and video creators with regard to audio.
(You can geek out on this topic by learning about condenser vs. dynamic mics—for now, focus on dynamic mics).
If you're getting a mic to record yourself (podcasting or videos, or maybe streaming as well), and you're willing to learn good mic technique, there's a pretty impressive mic that's affordable, dynamic (better noise rejection), and USB: the ATR-2100.
A few friends of mine record a podcast regularly, and it's two of them in the same small, concrete room, both using the ATR-2100. Take a listen to hear what it sounds like in definitely sub-optimal recording conditions.
However, dynamic mics, especially the ATR-2100, require good mic technique. So, if you want a casual desk mic (especially if you're a remote worker, not a YouTuber/podcaster), check out the AmazonBasics Desktop Mini Condenser. I was skeptical, because it's a condenser mic, which means it's likely to pick up a lot of background noise, but reviews online say it's a lot better in terms of filtering out background noise than the Yeti, and it's half the cost. You'll likely not get quite the same quality or background noise canceling, but it'll be a lot more forgiving. Do what works for you!
Rode has introduced a higher quality USB mic called the Rode Podcaster USB. It costs around $220, and if you don't want to go the whole way up to the cost of an audio interface and an XLR mic, this is a good bridge above the ATR-2100.
If you want to move up from there in quality, you're probably going to be getting into XLR mics. The (cost) downside of these mics is that you'll have to now add an audio interface into your setup.
I use the Onyx Blackjack, and many of my friends use the Scarlett 2i2, but you don't really need two-input interfaces like this; a single-input like the Scarlett Solo will do just fine.
These interfaces are doing several things: first, converting XLR to USB. Next, they'll likely have gain knobs for manually adjusting the input level from the mic, and headphone monitors, so you can hear what you sound like as you're recording. Finally, they'll likely have microphone preamps inside of them, which boost and often increase the sound quality of your signal.
When it comes to XLR mics, the workhorse of the audio industry is the SM58, and you could definitely do much worse. It's $100, and with this, an XLR cable, and an audio interface, you're pretty good to go.
There's an older mic, the Samson CL8, that I often hear recommended, so if you can find one used it's probably going to treat you well, but they're discontinued now.
Once you move up from there, you have a few frequently-recommended top-of-the-line studio mics. I'm partial to the Shure SM7b, but it's by far the most expensive option: you have to buy both the $400 mic and a $100 inline signal booster (because its output is quiet compared to other mics).
Another mic that's very popular for podcasters is the Heil PR-40. It's cheaper, at $330, and you don't need to buy the $100 booster with it.
If you want to geek out about microphones, mounts, mic technique, and even the quality of your power and cables, I just stumbled across this post from Olivier Lacan about microphones, and there's also Marco Arment's classic Podcasting Microphones Mega-Review.
Now that you have a mic, you might need some new headphones, and you might also need help getting your mic or your room set up for good recording.
Honestly? Get whatever works for you. There's nothing wrong with using the iPhone headphones you've probably got at least one pair of.
If you really want to splurge, I love the Sennheiser HD280 Pros. For $100, you get studio-quality headphones (and I mean that; I've recorded in one of the biggest studios in Chicago and that's the headphone we used) that are durable and ugly as hell.
But, truly. This is the last place to worry about spending money. Just make sure you can hear the other people and use your money elsewhere.
If you're buying a standalone mic, you'll likely need an XLR cable (if it's not USB), a stand, and, depending on the mic, a shock mount and/or a pop filter.
The most-often recommended desk stand for mics is the Rode PSA1. It's a fantastic boom arm... and it's also $100.
I don't yet have a really great, consistent recommendation for a cheaper competitor, but when I got started I used a tripod boom mic stand I had from my music playing:
I always just go on Amazon and pick what looks good. Got knowledge to share? Let me know on Twitter!
@theadamconrad reached out on Twitter and suggested Monoprice cables:
The shock mount and pop filter you use (and whether you need either) will depend entirely on the mic you pick. Check out the Marco Arment and Olivier Lacan mic articles to see their ideas about which mic needs which.
A note: if you can see the metal mesh of the mic you're considering, you're probably going to at least want a pop filter or a screen or something similar. Here's a cheap, OK, entry-level pop filter:
Someone (not me) could write three more blog posts on room treatment alone, but here are a few simple tricks.
First, your best option for noise isolation is to move your recording into a closet full of clothes, or record with a blanket over your head. Obviously this is a budget choice for podcasters, not a viable option for remote workers or YouTubers or streamers. But, it's free, and it works.
Second, you want to reduce the number of flat surfaces in your room that can bounce sound. Bring in rugs and furniture and hang stuff on the walls—especially if that stuff is fabric.
If you really want to spend some money on your room acoustics, ATS Acoustic panels are very large and very good.
OK, so we've covered a lot of ground. Let's look at a few example setups you might consider. Of course, you can mix and match however makes sense for you, but these are a few examples I've helped folks set up in the past.
I work remotely and I'm on video calls all day. I also run several podcasts (Five-Minute Geek Show and Laravel Podcast), create YouTube videos, and stream on Twitch and YouTube. I'm also doing much of this for work—my job isn't exactly developer relations but that's certainly a part of it. So, I care a lot about my setup, and I've been slowly investing in it for years.
For example, the Onyx Blackjack was a Christmas present years ago, for recording my bass playing. The lights I bought one at a time over the span of months to years, I can't remember. The mic I saved up for... for a long time. It takes time to get the right setup if you don't have an overflowing bank account.
For my remote work, I use an old, since-discontinued Plantronics wireless headset more than I use my actual podcasting mic, and if I weren't using my Sony camera for YouTube and streaming, I'd still be using my old Logitech C930e webcam (although, if I had to buy it today, I would try the Razer Kiyo).
If you're working on a machine that has no audio or video capability and need the cheapest possible option, get a Logitech C270 and use it for both video and audio.
If you're a remote worker and you just need a webcam and a headset to be on calls all day, I'd get the Logitech C615 and the Lifechat headset. These aren't my favorite choices, though; if you can skip up to the mid level setup I'd recommend it.
If you're an entry-level podcaster, I'd go for either the AmazonBasics mic (if you know you have a decent room and no A/C unit or kids) or the ATR2100 (if you're willing to work on your mic technique). This is a very acceptable setup. You honestly shouldn't need any more than this.
If you can get a bit more budget for your remote work setup, I'd go for the Razer Kiyo (caveat: I haven't tried it yet!) or the Logitech C920s. And then the best Jabra you can afford.
If you're talking pro level, I'd suggest you get the Scarlett Solo, a Shure SM7b, a Fethead booster, and the Rode PSA1 boom. For the slightly-cheaper version, get a Heil PR40 and drop the Fethead.
I've got pretty much my dream setup for streaming video, but what if you want to get started? Here's what I would get:
This should cut it for streaming. Honestly, you can do even less and still get by—streaming isn't as much about you as it is about your content. But this is also high-enough quality to record 1080p video (at 30fps), with plenty of light, and to get very good audio if you're willing to learn good mic technique.
$90 for the Kiyo, $40 for some cans and light bulbs, and, if you're recording full-screen videos instead of streaming, maybe $80 for the ATR2100 and another $20 for a stand.
Wow. That was a lot. Got questions? Hit me up on Twitter. I'll hopefully add any new stuff I learn in here.
Note: After writing this article I remembered Scott Hanselman had written a great, similar post, so I added a few of his recommendations here, using his original referral link. Thanks Scott!
In his article, Justin concludes that his first response to toxic people is going to be to ignore them. After all...
(Above quote screen-capped from Justin's article)
I share a lot of strong opinions online, so I meet my fair share of trolls.
I've often received, and shared, the same advice Justin ends his article with: Ignore the trolls.
There's wisdom behind this thinking. Most people, when made aware that they're making you feel bad, will stop. Trolls, on the other hand, have just received exactly what they wanted. So, how do you make them go away? Mute 'em. (Don't block--but that's another story.)
Justin is right here, as supported by the quote I ripped above. Our wellbeing requires a healthy distance from toxic people, and the first step is to learn how to ignore a troll when that's what you need.
So, if I agree with Justin, why am I even writing this post?
Because I think we should start there... but not end there.
There's one big problem if all we do is ignore the trolls:
None of us ignores trolls in a vacuum.
What do I mean by this? I mean that each troll that bothers you is A) doing so in a way that is seen by others and B) not bothering only you.
When we ignore the trolls, we are prioritizing our own mental health over the imagined "justice" of battling against some anonymous asshole. This is wise, and good for our sanity!
However, I want to propose that there are times and people--not all times, and not all people--when and for whom it makes more sense to battle some of the trolls, not just ignore them.
Why? Because sometimes fighting a troll sends a message to everyone else. Sometimes it sets a standard of what is and isn't acceptable behavior. Sometimes it speaks the truth when the troll has been speaking untruth. Sometimes it gives others language for what they know but can't express to be true.
Sometimes we fight a troll not to defend ourselves but to tell others "you're not alone", or "you're not crazy". Sometimes we fight a troll not to convince them they're wrong but to ensure that truth is spoken and that others who can sense truth have a little less gaslighting in their lives that day.
My wife often characterizes me as Don Quixote, careening around the Internet fighting trolls like that one XKCD comic we all love about "someone is wrong on the Internet!"
I don't care if someone's wrong, though. I care if someone's making an unhealthy space for others. Making others feel unwelcome, unappreciated, unintelligent. And when I have power and privilege in a place to work to make it safer and more welcoming, I'm going to do it.
However, there are a lot of factors that can make it a bad call. For starters, if you're doing it alone, or if you don't have a supportive community around you, you're probably going to burn out fast:
(Above quote screen-capped from Justin's article)
This is why I gave a talk at Laracon this year about the magic of Laravel's community; I want to both celebrate the ways it's welcoming, but also continue to grow as a community that is characterized not by its toxicity but by its hope and its kindness.
One other note to consider: not everyone who disagrees with you on the Internet is a troll. Sometimes that person is just bad at treating the opinions of others as valid. Or, sometimes... you might be the one who's wrong.
So. I think we should all protect our wellbeing by ignoring the trolls as our first response, as Justin mentioned.
I think we should also consider engaging the trolls when we're in a position to do so, and when it serves a broader goal.
I've said these things before, though. And do you want to know the absolute worst response I've gotten?
Apathy.
Conflict avoidance.
Unwillingness to be made uncomfortable in the pursuit of other people's safety.
Each person will have to make their own decision every time they interact with a troll: how will I respond?
If you haven't started with the foundational response that you are not at fault, and this doesn't impinge on your self worth, stop reading this article and go read Justin's article "The Haters" instead.
If you do know you're not at fault, and you just don't have the emotional and mental space to participate in yet another trolling session, do what you need to do to protect your sanity and wellbeing. I have no intention to get in the way of that.
But. If you're comfortable. If you're unafflicted. If you're not often, or even currently, the target of the trolls, it will be tempting to consider only what is easy for you in that moment. And in that moment, I'd ask you to consider whether there is another response--harder, likely, to give--that you can make space for, one that would make your world, your community, your space a bit healthier, friendlier, less tolerant of assholes, more welcoming to newcomers.
Is there a truth that needs telling? An untruth that needs correcting? Is there someone who might be watching these lies told unchecked in your community? Do you know how to correct "wrong" code loudly but not how to correct wrong behavior toward other humans?
I guess my main goal here is to encourage that, when you have the freedom and ability to address it--even if it makes you uncomfortable--the response to trolls shouldn't always just be ignore the trolls.
Sometimes we need to fight them, for the sake of the truth.
Here's a video of the announcement: Taylor Otwell - Introducing Laravel Vapor
(image from Yaz Jallad)
There's currently one official way to get your apps into production: Forge.
People have been asking about auto-scaling, etc.
A few years ago I finished a whole product called Laravel Cloud. Hinted at it. Handled auto-scaling. But when I got done I just didn't feel like it was revolutionary enough. Didn't blow me away. Front end built by Steve, whole backend done, but tabled it. Wanted something that blew me away for deploying PHP applications.
Last 9-10 months 40 hours a week.
(image from Yaz Jallad)
https://vapor.laravel.com/
Starting with an example:
Vapor is ready for scale. 5000 users with 2312 requests a second still getting 12ms request times.
Laravel Vapor is a serverless deployment platform for Laravel, powered by AWS. On-demand auto-scaling with zero server maintenance.
Even made small tweaks to Laravel over the last few months to make it all seamless.
.@taylorotwell announces Laravel Vapor, a full-featured serverless management & deployment dashboard for PHP/Laravel #Laracon pic.twitter.com/UwgcffIAvd
— Sara Bine (@sara_bine) July 24, 2019
Google Cloud Functions, Amazon Lambda, etc.
You deploy to their platform but never think about infrastructure. Of course there are actually servers but you never think about them. You don't worry about certs, PHP versions, how much scale my app needs, etc. because my app just scales elastically very quickly.
In the past you might've used Horizon to manage your queues; 20 Horizon processes working queue jobs. Now, you don't even have to think about it. If you get 1000 jobs on your queue they'll be executed within seconds. If no jobs, no workers. It's all entirely elastically scaled.
If no one is using your app, you're not getting charged for it.
Unlimited teams, users, projects, deployments.
Price is $39/mo, $399/year (plus all your AWS costs).
Built around teams from day one. Don't cost extra.
Customize their abilities.
Each project has multiple environments, listing recent deployments for each environment. Each env gets its own vanity URL. E.g. https://snowy-hurricane-12349834324432.vapor.build/ to see your actual app.
Staging domains get no-index header so they're not indexed.
Has a command line tool: run vapor deploy production from your project folder.
Uploads assets to Cloudfront for CDN then kicks off the application build.
Zero downtime, like Envoyer.
vapor deploy staging
Super easy rollback from the UI.
vapor.yaml: build steps, different config for each environment, etc... domain, storage, build steps for each
id: 4
name: vapor-laracon
environments:
    production:
        database: laracon-deb
        cache: laracon-cache
        domain: scenery.io
        storage: laracon-us-2019-storage
        build:
            - 'composer install --no-dev --classmap-authoritative'
            - 'php artisan event:cache'
        deploy:
            - 'php artisan migrate --force'
    staging:
        # etc.
Build steps run locally. Deploy steps run on AWS.
An ASSET_URL env var is injected so you can just use the asset helper in Laravel and it'll point to the right CloudFront URLs.
Has a button right in the UI for it. You can still access the full app from your vanity URL to keep working on it while your production app is in maintenance mode.
You'll have to do it a lot less than you used to, since Vapor knows so much about your app. For example, we inject database, cache, and queue variables for you. You'll think about them the most for third-party services like Pusher, Bugsnag, etc., and you can manage them in a textbox in Vapor.
Changes to .env don't take effect until you deploy again, but you can hit re-deploy to make that happen.
Kinda like environment variables, but your env vars are limited to 4kb and your Passport private keys and other longer things might not be able to work that way.
Secrets are versioned and encrypted at rest.
When I rollback, it rolls back the secret that it was deployed with as well.
Secrets are available as environment variables just like any others.
Run one-off Artisan commands against your serverless app. No servers to SSH into! Run them from the UI. See log output from the UI.
HTTP requests, CLI/queue invocations, estimated cost, CLI/queue average duration, HTTP average duration, etc. over the last 24 hours or 30 days. Taylor has 87k CLI/queue invocations and a few thousand HTTP requests in the last 30 days and showed around $3 on the chart.
Also has alarms. Can configure an alarm to say, e.g., "If I get more than 1k requests per minute for over 5 minutes, Slack or webhook or email me". Http Requests, CLI/Queue invocations, etc.
Metrics show you request duration, usage, queue invocations, and estimated lambda costs #Laracon pic.twitter.com/pv7T04wdOc
— Michael Dyrynda @ LaraconUS 🗽 (@michaeldyrynda) July 24, 2019
Can view/tail your logs right in Vapor. AWS CloudWatch is not a lot of fun.
Can type a search phrase and it auto updates anything that matches.
Log output in Vapor makes it easy to view and search logs generated by your apps in whichever environment you're deploying to #Laracon pic.twitter.com/gv0r9ArhT2
— Michael Dyrynda @ LaraconUS 🗽 (@michaeldyrynda) July 24, 2019
Fixed size--you can pick the specs. db.t2.micro for $15/mo, etc... normal Amazon stuff.
Set a max disk size ($0.115/GB/mo) and it will auto-scale up to your max.
Public vs private. Private live in your network, can't be accessed from your local. But env vars auto injected into your local.
Attach a DB by adding it in vapor.yaml.
Scaling: You can scale the fixed-size database and you can scale it up or down to any other size and it'll automatically adjust. Keep using your app.
Autoscaling, etc. on the DB as well
Alarms on it etc.
Can restore to any point in time in the last 3 days. Name the restored DB, pick the time, and it creates a new DB with the same specs, restored to that moment. Down to the second--nothing about hourly, daily, etc.
vapor database:shell database-name here
Run queries from in there against the DB.
Add a tiny box in the network and you can SSH in there. So you can use TablePlus/etc. to ssh into that box and manage the DB from there.
Similar to DB. Can make Redis clusters directly from the UI with as many nodes as you want in the cluster.
If you don't need a full Redis cluster, Vapor will automatically set up a DynamoDB (serverless, autoscaling) cache for your app.
Attach in yaml, I think.
Redis tooling already installed in Vapor. As soon as you turn it on it already flips Laravel over to Redis without you having to do anything.
Can manually scale up the number of nodes without any downtime.
Great metrics on the cache; CPU usage, hit rate, miss rate, etc.
Cache put of the box with DynamoDB. Resis clusters available too 👌 #Laracon pic.twitter.com/03S5wO9fDV
— Andrés Santibáñez (@asantibanez) July 24, 2019
vapor cache:tunnel (I think) tunnels to 6378 (one short of the usual Redis port), and you can use any Redis GUI app, set up localhost on that port, and directly attach to that cache.
It just works. Maps an event source mapping onto SQS.
Auto set up to run the php artisan schedule:run command every minute. Always. No config needed. (Uses CloudWatch)
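For context, on a traditional server you'd have to register that yourself; the standard cron entry from the Laravel docs (with /path-to-your-project standing in for your app's path) is what Vapor is replacing here:

```
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
```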
Separate page for domains. Can even purchase domains (AWS Route53) directly from the UI. Automatically makes wildcard certs and DNS zones.
If not purchased through Vapor, just add a domain you own to Vapor. Point to AWS name servers and then you can manage DNS through Vapor, or manage your own DNS on your own and just point certain CNAME records etc.
Domain management in Vapor. Add, buy, configure #laracon pic.twitter.com/NuzVnDtoFA
— Andrés Santibáñez (@asantibanez) July 24, 2019
You can use whatever you want. Vapor auto sets up DKIM etc. for you if it's managing your domain.
Send files straight to S3. Uses "pre-signed URLs". Complicated in the backend so Vapor simplifies it.
Wrote a JS package on npm that lets you do Vapor.store:
Vapor.store(file, {
    progress: currentProgress => {
        this.uploadProgress = Math.round(currentProgress * 100);
    }
}).then(storedFile => {
    // storedFile.uuid, .key, .bucket, .extension
});
Uploads these files into a temp directory in your storage bucket. Will be cleaned after 24 hours. Only gets moved into true storage when your backend copies that file from the temp directory into the more permanent directory.
S3 uploads directly with Vapor #laracon pic.twitter.com/9ZcUHDfzwW
— Andrés Santibáñez (@asantibanez) July 24, 2019
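The temp-then-promote flow is easy to picture with plain directories standing in for the S3 bucket (all paths and file names here are made up for illustration):

```shell
# "tmp" stands in for the bucket's temp directory, "storage" for permanent storage
mkdir -p /tmp/vapor-demo/tmp /tmp/vapor-demo/storage

# 1. The client-side helper uploads into the temp directory under a UUID key
echo "uploaded bytes" > /tmp/vapor-demo/tmp/some-uuid

# 2. Your backend validates the upload, then copies it to its permanent home
cp /tmp/vapor-demo/tmp/some-uuid /tmp/vapor-demo/storage/avatar.jpg

# 3. Anything still sitting in tmp/ gets cleaned up after 24 hours
ls /tmp/vapor-demo/storage
```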
You can deploy from your CI pipeline.
In composer.json you'll add laravel/vapor-cli, so you can run Vapor from your CI.
Configure your CI deploy step:
php vendor/bin/vapor deploy production
Boom.
vapor test spins up Docker with a build identical to Vapor's and runs your phpunit tests.
Every single thing that he showed in the UI (create DBs, scale DBs, scale cache, etc.) can all happen from the CLI. E.g. vapor database foo, and you pick everything interactively. Can even run vapor metrics production.
That's it for now! I'll work on getting better pictures and go back and try to re-organize and re-structure it a bit later, but for now, time to shut the computer for a bit. 👋
So, if you've worked with WordPress before, but you've never worked on the command line or you don't have experience working with Composer, this is for you.
First, let's get you set up with a terminal client, or the application that allows you to interact with your computer's command line.
If you're working with desktop Linux, you already know how to open up a terminal session. No new steps here.
If you're working with a Mac, you can find an application named "Terminal" in your Applications folder, under the "Utilities" folder. Open that, and you're ready to go.
If you want a bit of an upgrade, there's a free Terminal replacement named iTerm2 that most developers prefer to Terminal.
I asked on Twitter about the best way to get a functioning terminal on Windows, and the answers varied, but the most common recommendation was GitBASH, followed by CMDer.
I'm going to recommend GitBASH; you need to have installed Git on your Windows machine anyway, and it comes with GitBASH, so that seems a good place to start. Check out the Git for Windows web site to learn how to install Git and get access to GitBASH.
If you want to upgrade your terminal a little, here's a quick video on how to install and use CMDer after you install GitBASH: https://www.youtube.com/watch?v=Xm790AkFeK4&feature=youtu.be
If you're running Windows 10, you can get system-level support for terminal access with the WSL (Windows Subsystem for Linux). I don't know exactly how easy it is to set up, but a lot of folks recommended it; here's an intro video: https://www.youtube.com/watch?v=Cvrqmq9A3tA
The first thing you'll want to get used to is how the terminal works. You're going to see a prompt in front of you that looks something like this:
Exactly what you're going to see here will change based on your environment and the theme loaded by your shell, but most terminals will show at least these details:
- The current path: if you see a tilde (~) in the path, that means your user's home directory. On macOS this is /Users/your-user-name-here/.
- The prompt character: there will be a character to the left of your cursor that just means "You can type here". It's often a $ or a >, but some terminal themes use other characters.

If you want to move to another directory, you'll want to use the cd command. So, if I want to move to the directory /Users/mattstauffer, I can type cd /Users/mattstauffer.
Because that path starts with a /, I'm telling my terminal that I'm defining the absolute path I want to go to. That means "this path I'm defining is at the root of the computer's file system". Like this: Root / Users directory / mattstauffer directory.

But if I were to start it without the /, that would mean "go to this path beneath the directory I'm already in". So if I typed cd Users/mattstauffer, and I was already in the /Users directory, I'd be saying "take me to /Users/Users/mattstauffer", which, of course, wouldn't work.
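If you want to convince yourself of the difference, here's a throwaway session you can run (the directory names are invented for the demo):

```shell
# Build a scratch directory tree to play with
mkdir -p /tmp/cd-demo/Users/mattstauffer
cd /tmp/cd-demo

# Relative path: resolved beneath the directory we're already in
cd Users/mattstauffer
pwd

# Absolute path: starts with /, resolved from the root of the file system
cd /tmp/cd-demo/Users/mattstauffer
pwd
```

Both pwd calls land in the same directory, because the relative form was resolved from /tmp/cd-demo.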
For me, all of my web projects live in the /Users/mattstauffer/Sites directory, which I can shorten to ~/Sites. So here's what it looks like when I open up my terminal and want to work on my web site:
cd ~/Sites
cd mattstauffer-com
# (which is the same as:)
cd ~/Sites/mattstauffer-com
# (which is the same as:)
cd /Users/mattstauffer/Sites/mattstauffer-com
If you want to list all of the files in your current directory, you'll want to use the ls command. I prefer adding the -al flags when I call it, which makes the listing a lot more readable:
ls -al
Here's a sample output:
drwxr-xr-x 28 mattstauffer staff 896 Feb 12 09:46 .
drwxr-xr-x 50 mattstauffer staff 2912 Feb 7 14:07 ..
-rw-r--r-- 1 mattstauffer staff 18 Aug 21 2017 README.md
-rw-r--r-- 1 mattstauffer staff 1286 Dec 20 10:10 package.json
drwxr-xr-x 29 mattstauffer staff 928 Jan 14 09:43 source
You can technically ignore all of the columns except the far right column, which is the name of the directory or file. If you want to know whether it's a directory or a file, look at the far left character of the far left column; if it's a - it's a file, and if it's a d it's a directory.
The first, third, and fourth columns are about permissions. The second column is basically useless. The fifth (896, etc.) is the file size, in bytes. Then you get the date, and the time, and then the file/directory name.
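Since that far-left character is all that distinguishes files from directories, you can even filter the listing on it; here's a quick sketch using a scratch directory (so the output is predictable):

```shell
# Build a predictable directory to list
mkdir -p /tmp/ls-demo/source
echo "hello" > /tmp/ls-demo/README.md
cd /tmp/ls-demo

# Rows starting with "-" are plain files...
ls -al | grep '^-'

# ...and rows starting with "d" are directories (including . and ..)
ls -al | grep '^d'
```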
So, what exactly is Composer? It's primarily a dependency manager. Your project will define its dependencies--the other projects it needs to have access to in order to do its job--and then Composer will install those projects, or packages, and make them accessible to your code.
You'll use a file at the root of your project named composer.json to define those dependencies, and that will auto-generate a file named composer.lock that saves the versions of the dependencies you installed so you get those same versions the next time you install.
Let's get Composer installed.
You can find the instructions to download and install Composer on your machine on the Composer Downloads page. If you're working with Windows, there's a special installer which you can learn about in the Composer intro docs.
The goal is that, at the end of this installation process, you can run composer from anywhere on the command line and it'll work--which means it's "installed globally" and "in your PATH". My hope is that Composer's installation instructions will be enough to get you there, but if you follow them and the following command doesn't work from any directory on your machine, please let me know on Twitter:
composer -v
Once you have Composer installed, there are a few primary ways you can use it.
If you clone an existing project that uses Composer, you'll see that it has a composer.json and a composer.lock file in it. But if you try to run the project, it probably won't work. The error usually looks something like this:
Warning: require(/Users/mattstauffer/Sites/symposium/bootstrap/../vendor/autoload.php): failed to open stream: No such file or directory in /Users/mattstauffer/Sites/symposium/bootstrap/autoload.php on line 17
That's because it's trying to access the files Composer loads, but you haven't created them yet; those files are ignored in most projects' version control, with the expectation that you're going to use Composer to install them after you clone. So, let's install them! Run this command from your project's root directory:
composer install
This command reads composer.json and composer.lock and installs all of your required files for you. It'll take a bit, especially the first time you run it, but then your site should just work!
Tip: The files that Composer installs for you go into the vendor directory. You may be familiar with NPM and its node_modules directory. Same deal here.
If you need to add a new dependency to your project, or create a project that has a single dependency, you can use composer require packagenamespace/packagename.
In an existing project, this command will add that package to your composer.json and composer.lock files and then install it.

In a new project, it will create your composer.json and composer.lock files and add just that package to them.
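To make those two files a bit more concrete, here's roughly the composer.json that a composer require monolog/monolog would produce (a sketch; the exact version constraint depends on the current release):

```shell
# Write out a minimal composer.json like the one `composer require` creates.
# The "^2.0" constraint is illustrative, not necessarily what you'd get today.
mkdir -p /tmp/composer-demo && cd /tmp/composer-demo

cat > composer.json <<'JSON'
{
    "require": {
        "monolog/monolog": "^2.0"
    }
}
JSON

# Composer reads this "require" map to decide what to install into vendor/
grep 'monolog/monolog' composer.json
```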
There's a lot more to learn about the command line and about Composer, but hopefully this is enough to get you up and running with the basics.
cd: Change directory
ls: List files

Note: As I was about to publish this post that I started on November 27, we made the (somewhat crazy) decision to push back the publishing of Laravel: Up and Running's second edition by about a month so we can cover yet another version of Laravel in it. So, these stats are all for the original 5.7-covering version of the second edition; imagine all of them being even a bit higher once we update it to cover 5.8.
I just finished writing and editing the second edition of my book, Laravel: Up and Running! WOO WOO! Here are a few quick statistics for the work that went into the second edition:
There are a lot more fun stats I could probably come up with but, to be honest, I'm tired of working on this book and I want to see it get published now!
While it's in the hands of the O'Reilly production and QA team, though, I wanted to share a little bit of what it's like to write a book like this. I've often seen folks self-publish a book without any editors or proofreaders or anything, and especially with the second edition of my book, I took basically the opposite approach. So, I wanted to share a bit of who helped me—both the broader roles, but also the specific people.
There was a similar group of people who helped me in the first edition, but I'm going to be writing up my second edition team here because I made a big shift in how I wrote the book that I want to share.
Most of how my editing process worked was the traditional O'Reilly structure. I wrote the book in git, in AsciiDoc, using the MarkdownEditing plugin. Here's what it looks like:
Once I finished writing in any session, I would push it up to O'Reilly's Atlas platform, where my O'Reilly editor could review it and we could both generate previews:
Once I had finished a chapter, or a series of chapters, I'd get high-level editorial insight from my O'Reilly Editor, Alicia Young. These edits would be things about writing style and communication; here's one of her notes from my testing chapter:
The idea of fidelity of tests isn't really discussed elsewhere. Is that something your readers will be familiar with? I admit I had a hard time understanding the distinction based on the examples you give after this statement, so you may want to add another line of explanation further clarifying this term in this context.
She gave me 73 comments on chapters 12-18, for example, most of which were minor wording changes to make a sentence clearer.
I had a group of tech reviewers that I found who were willing to read through the whole book for a small stipend and give any notes they found. These were usually technical issues—ranging from "You keep referencing Eloquent but we haven't had the Eloquent chapter yet" to "What about referencing the @csrf helper here?" to "I don't think this code would actually work."
I can't say I remember exactly everyone who's involved in production, but here's what I know so far.
I know I have a production editor who is, as the name suggests, the editor of the production process. They'll be responsible for getting all the right people lined up to take their crack at the book, from copyeditors (who mainly handle writing-related issues) to indexers (or whatever you call the people who mark the book up for the index) to layout people who handle weird page breaks and stuff like that.
There's a lot that goes on here, and most of it is entirely outside of my world—I just watch the git commits roll in, and at one point I get a PDF of notes from the copyeditor to review.
Outside of the setup that O'Reilly provides, I've also brought on a little bit of help. For the first edition, I brought in a different group of wonderful tech reviewers into O'Reilly's system. I also got help in the form of a few of the early readers giving me some proofreader-style feedback after we released the book, which I was able to incorporate into later releases (thanks to all of you!)
For the second edition, I knew the book was ready for a big refresh (it covered up to Laravel 5.3 and we were now on 5.5, heading toward 5.6). But I just didn't have time. My wife's acting career is taking off, my kids are getting older, Tighten is busy and I just don't have dozens of hours a week to spend updating the book.
But, I had an idea: I would hire someone to handle two primary jobs: combing through the release logs and docs changes of the last few versions of Laravel to make sure we updated everything, and then running all of the code samples in the book in the latest version of Laravel to make sure it worked.
I hired my friend Wilbur Powery to take on this task, and he became my research assistant, sending in pull requests with small modifications to the code samples or some of the documentation, or any time it was more than just a syntax change, leaving me a note inline that I should write or delete or modify a bigger section.
With this help, I was able to essentially ignore the book for weeks at a time while Wilbur would research the next chapter and deliver his notes. Then, once each chapter was done, I would approve all of his minor edits, and then sit down to write or re-write any bigger sections that had changed significantly.
Once that was all done, I printed the entire book and had it bound. I read the entire book cover to cover over the span of a few weeks, poring over every sentence and example with a pen or marker and a huge stack of mini postit page marking tabs.
Once that was done, I entered those changes in manually.
I mentioned this on Twitter, but as I was about to publish this article, we discovered the book wouldn't go to print until early February 2019, which is right around when Laravel 5.8 was going to come out. So, I worked with my editors to add a bit of time into the production timeline so we can update it for 5.8 and then push it out in early March.
Here's a really rough timeline of events (without dates, but you can at least see what the order is like):
I think that's all for now. Off to get Wilbur and me started with making a list of edits for 5.8!
Laravel Telescope is a new application debugging assistant from Laravel, written by Mohamed Said and Taylor Otwell. It's open source, free on GitHub, and will likely be released next week.

You'll pull it into your applications as a third-party dependency via Composer.
Once you install Telescope, you'll access it by visiting the /telescope route of your application.
If you've ever used Clockwork or Laravel Debugbar, think those but as a standalone UI, and with superpowers.
Telescope is comprised of a series of watchers that "watch" every request that comes into your application, whether from HTTP requests, from the command line, from a scheduler, or from a queue.
These watchers capture all sorts of information about these requests and their associated data--things like database queries and their execution time, cache hits and misses, events fired, mail sent, and much more.
There are tabs in the UI for inspecting each of the following, which each reflect a "Watcher":
Let's walk through each of these tabs and what they let us inspect. Each of these tabs shows a list page and then allows you to dive into a detail page for any given item.

This tab allows you to see all of the HTTP requests that come into your application. You'll be able to inspect each request and all sorts of useful info about it.
Each request page also shows any data it has from other watchers that are related to this request; for example, all the database queries and how long they took; which user is authenticated for this request; and more.
The commands tab lists all the commands that have been run and their exit codes. When you dive in you can also see all of their arguments, options, and related items.
Lists the scheduled tasks that have been run. On each task's detail page, see all of its scheduling information, like its cron schedule (e.g. * * * * *).
The jobs tab lists out all of the jobs that have run or are running. It's similar to Horizon, but Horizon is Redis-only, isn't just a UI, and also interacts with how your queue workers are running. Telescope, on the other hand, is just a UI, but it works for all queue drivers.
On the jobs list page, you'll be able to see the job name, which queue and connection it ran on, its status, and when it happened.
On the job detail page you'll be able to see all of that data and more: hostname, job's fully-qualified class name, connection, queue, # of tries, timeout, tags.
Jobs are auto-tagged with any attached Eloquent models (e.g. App\Video:1), with the user if there's a user attached, etc.
Tags: items like requests, commands, etc. will be automatically assigned tags by Telescope (e.g. if it's a request by User 1, it gets automatically assigned the tag Auth:1; you can click that tag and it'll filter just their tagged items, etc.)
Just like with HTTP requests you can see all sorts of info related to this job like database queries it fired, jobs this job kicked off, and any logs it generated.
If you kick off a closure, instead of seeing App\Jobs\RenderVideo you see Closure (web.php:43), showing where it was defined.
New queued closures: Taylor contributed to a new library to bring back queued closures, which Laravel used to have but went away a while ago. With these contributions and this new library, if you use a model to import it into your closure, it'll store the model ID, not the entire model, which is much better (and what queue classes already do). So, queued closures are back!

dispatch(function () use ($video) { // do stuff in a queued job });
This will serialize the closure with a hash along with it; this is because with queued closures, someone could previously modify your queue event to inject arbitrary PHP to be run through it, which is not good! Now it hashes it and checks your code against the hash.
Closure is serialized as a long string which includes the entire code and a hash of it (uses code similar to the signed URLs).
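That hash check is essentially a keyed hash (HMAC) over the serialized closure body. As an analogy only--this is not Laravel's actual implementation--the idea can be sketched with openssl:

```shell
payload='function () { /* do stuff in a queued job */ }'
secret='example-app-key'   # stands in for your application key

# When queueing: compute a signature over the serialized closure
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# When running the job: recompute and compare before executing anything
check=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$sig" = "$check" ]; then
    echo "signature valid: safe to run"
else
    echo "signature mismatch: refuse to run"
fi
```

Anyone who tampers with the payload but doesn't know the secret can't produce a matching signature.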
Logs all exceptions and allows you to inspect each. This will show you similar data to the other tabs, like hostname, type, request, tags, authenticated user.
But you'll also see the location within the code, highlighted, with a few lines of code above and below it; and you'll also get a full stack trace.
You can also get a link to an exception detail page from the request in which it was thrown.
NOTE: In many tabs, if you're on an individual page (e.g. the page for a given exception) you will get a link to the request page that generated that one
If the same exception happens multiple times, they'll get grouped on the list page, but you can still drill down to individual exceptions from the exception show page.
The logs tab shows you the basic log message, level, and when it happened for all log items.
When you visit the individual detail page for the log item, you can see more information including any context data you passed to the log items (as an array).
"A little nicer than digging through raw text files".
If you pass context to your log items with the array , you can see all that data, see the request that triggered it, which user triggered it. "A little nicer than digging through raw text files."
"This is one of my favorite features"
If you use the dump() method in your code, and you have this dump screen open in Telescope, you'll see the dumps in Telescope but not in your actual application. This gives you dd()-style output of your data without it messing up your normal page load. Each dump also links to the request which generated it.
If you leave the dump screen, all of a sudden your dumps show up in your browser again.
List of all your DB queries--like the debug bar. How long they took, jump in and view the full query, which request triggered it, etc.
Nice formatted view.
Can set a boundary for what makes a query "slow" in your service provider; once something takes longer than that, it's tagged as slow and also marked in red on the list page.
NOTE: Super slick and fast search on every list page. Searches tags and other stuff.
You can see create, update, delete events; shows the changes that were made, etc.
Shows a list of all your events. You can see which events were broadcast with a tag; see a list of all listeners and dig into which called.
Shows a list of all emails that were sent; who the recipients are; when it happened; whether it's queued and then when the queue kicks it out. Can see the email subject, and when you dig into it you also see a preview of the email like MailTrap.
Can even download the raw .eml file and open it in your client of choice.
Shows all notifications, what type they were, etc.
No previews since some notifications aren't preview-able, but if it's a mail notification you'll also see it there.
If notification was queued, you can also see it under the Jobs section on the request. Lots of angles to get much of this data.
Shows cache hits and misses and updates, etc.

Shows the key, the data, and when it expires; can see the request that triggered it, and on the request page you can see all the cache hits/misses for that request.
Similar to cache
How long they took, when it happened, which request initiated, etc.
Get info about the authenticated user on any entry on any tab
Can have a list of emails in telescope service provider who can access it in production
Or use the viewTelescope gate to define whether a given user can access it
You may not want to store everything that happens in production, so you can, in your Telescope service provider, call Telescope::filter(function ($entry)) with your own callback.
default filter:

Telescope::filter(function ($entry) {
    if (app()->environment('local')) {
        return true;
    }

    return $entry->isReportableException() ||
        $entry->isFailedJob() ||
        $entry->isScheduledTask() ||
        $entry->hasMonitoredTag();
});
But you can modify this if you want.
Go into the radar button and say to monitor a tag. You can say monitor Auth:1 in the UI.

In prod it doesn't log requests, but if you monitor, for example, Auth:1, you now see all of their requests logged until you un-monitor it.
NOTE: Horizon and Telescope play nicely together, if you're using Redis queues.
A scheduled job prunes stale entries from Telescope. Can run nightly if you want to delete stuff older than __ hours.
Also a setting in config/telescope: can enable or disable any of the watchers. E.g. Watchers\CacheWatcher::class can be disabled.
Also a TELESCOPE_LIMIT, which is 100 by default; it means keep 100 queries at a time, 100 Redis entries, etc. LOTS of this is configurable by env.
Telescope can run locally and on production and has built-in authorization and tools for protecting private data. It provides access to similar data from multiple different angles, has a bevy of configuration options, and allows for robust tagging and filtering.
Consider putting it on a separate database.
Taylor mentioned on Twitter later you can add filters to ensure private data doesn't get logged.
Has a dark mode that you enable with Telescope::night() (probably in a service provider somewhere?)
These are my notes that I took during the announcement on 2018-07-25. I hope to go back later and update this after a more careful re-watching of the YouTube recording that's now up, so I could get some of my code samples more exact and catch anything I missed.
If you notice anything I missed or got wrong, please let me know on Twitter! And please check back in a few days so I have time to fix this up. :)
UPDATED: 2018-07-27 7:00am CST
Taylor just gave his keynote at Laracon US introducing Laravel Nova. He's since released a YouTube video and a Medium post introducing Nova from his perspective, but it's such a huge project that there's going to be a lot to write from a lot of different perspectives.
So, here is everything I've learned about Nova so far.
Laravel Nova is a new tool in the line of Laravel Spark, Laravel Cashier, and Laravel Passport that you can pull into your Laravel apps. It's not available for purchase yet, but will be in about a month.
Nova is an admin panel tool. It's not an admin panel generator; it's not generating files that you then need to modify. And it's not a CMS; many of the features you expect from CMSes don't come out of the box, but it's also endlessly more flexible and developer-focused than CMSes. So the best way to describe it is as an admin panel tool, but it's definitely head and shoulders above everything else that exists in this space.
You're going to use Nova to build administrative dashboards for your apps. But Nova is not necessarily a part of your app (entangled, as Taylor put it) like Spark was. Rather, it's a standalone product that allows you to build super quick management tooling around your data. You do pull it into your codebase as a package, but you don't have to touch your existing code at all. It does have the ability for you to modify it enough to allow different types of users to log in, so you could actually build some relatively simple SaaSes purely with Nova; but most people will have a Laravel codebase that is entirely separate from Nova, and use Nova to build the admin panel at a URL something like myapp.com/nova
.
I haven't run this by Taylor, but I would say that, in theory, you could build Nova-based admin panels for non-Laravel apps. All it needs is Eloquent models and access to your database (and, if you want to share users with your other app, you have to make them able to share password hashing algorithms). So if you have, for example, a Rails app that you're using Sequel Pro to administer, you could throw up a Laravel app with only Nova installed on a subdomain of your app, build Eloquent models for the Rails database tables, and then administer the same data with Nova.
At its core, Nova is a package you pull in with Composer that makes it easy to attach "Resources" to your Eloquent models. Imagine you have a list of users in your users
table with a User
Eloquent model; you're now going to create a User
Resource class which attaches to your model (I think there's a "model" property on the resource that allows you to do this). The moment you create a Resource, it's registered into Nova and gets added as one of the editable chunks of the admin panel.
The admin panel is a single-page Vue app (using Vue Router), with Tailwind for styles and Laravel JSON APIs to serve all the data.
By default, every resource gets your basic CRUD treatment; list users, create user, edit user, delete user. Each resource will get a link in the left navigation.
You can customize all sorts of things in the app--which fields are on a resource, "cards" that show little bits of custom data, "resource tools" on a resource that allow you to add bigger chunks of functionality like "tracks its version history" to any given resource, "sidebar tools" that allow you to add larger chunks of custom functionality, and much more.
But at the core, you're using Resources--most attached to Eloquent Models, but some just free-floating--to generate CRUD quickly and easily.
And importantly, to set it apart from most of the major CMSes, all of its configuration is in code, not in the database.
Each Resource will be its own class. I don't have actual sample code, but I think it's going to be a bit like this:
<?php

namespace App\Resources;

use App\User;
use Illuminate\Nova\Resource;

class UserResource extends Resource
{
    protected $model = User::class;

    public function fields()
    {
        return [
            ID::make()->sortable(),
            Text::make('Name')
                ->sortable()
                ->rules(['required']),
            Gravatar::make(),
        ];
    }
}
Each resource has a list page, a detail page, and an edit/create page. Here's a sample detail page:
A lot of common fields come enabled out of the box. You'll see things like Text, ID, Date, etc... but you can also build your own field types in code and then use them in your resources.
Most fields are just a single UI item that syncs with a column in a database; for example, Text shows an <input>
and matches to a VARCHAR
-style column in your database. But some fields may have one UI element for multiple columns, or multiple UI elements for one column. Some fields might not have database columns backing them at all (if you're a Vue developer, these fields are a bit like computed properties vs. data properties).
Fields can be shown and hidden based on the view (list view vs. detail view, for example), based on the user logged in, or based on anything else you want to customize. More on that later.
This isn't an exhaustive list, but here are the types I know exist:
If you want to group multiple fields into a little mini panel within your forms, you can do that.
public function fields()
{
    return [
        // definition of the name field
        // definition of the email field

        new Panel('Address', [
            // definition of the address field
            // definition of the city field
            // definition of the state field
            // definition of the zip field
        ]),
    ];
}
You can also pull out the definitions of a group of fields to a private method within your Resource class to clean things up a bit; just use the $this->merge()
method there:
public function fields()
{
    return [
        // definition of the name field
        // definition of the email field
        $this->addressFields(),
    ];
}

private function addressFields()
{
    return $this->merge([
        // definition of the address field
        // definition of the city field
        // definition of the state field
        // definition of the zip field
    ]);
}
One idea Taylor had for a way to organize some of the more complex field definitions is to have invokable classes that represent the way to get that. So, rather than writing a closure inline in this thumbnail()
method to define how to retrieve a movie poster based on the given movie title, he created a one-off class that does it instead:
public function fields()
{
    return [
        Text::make('title'),
        Avatar::make('Poster')->thumbnail(new RetrieveMoviePoster($this)),
    ];
}
Then his class looked something like this:
class RetrieveMoviePoster
{
    public function __invoke($movie)
    {
        return Cache::remember('movie-poster-' . $movie->title, 3600, function () use ($movie) {
            // This code looked up the movie's poster URL by the title, and then returned it
        });
    }
}
So when the Nova UI looked for this field, it didn't even have a "poster" in the database anywhere; it just passed the movie to his class, which looked it up, cached it, and returned it.
Another example of a field that's not backed by a database property would be an icon
field on a user.
Let's say you're using Gravatar on your application's frontend to show the user's image; and what if you wanted to also use Gravatar to display their image in your admin panel?
Gravatar works based on the user's email address, so it's not a separate database column. But you can add a Gravatar
field to your resource that grabs the resource's email address, looks it up on Gravatar, caches the resulting URL, and then displays it as one of the fields in the Nova admin panel.
File fields can specify what disk they're on and other useful pieces of information for managing files. Taylor also gave examples of how you may want to allow for a file upload in the UI and capture not just the file itself, but also its original name and size--which is one of the examples I talked about where a single UI element can send to multiple database columns. I'll try to show that once I get a chance to look over the YouTube again.
Files also have a prunable()
method you can chain onto their definitions, which means that if I delete the entry in the database, Nova should delete its backing file as well:
public function fields()
{
    return [
        File::make('document')->disk('web')->prunable(),
    ];
}
Photo and Video fields are like File fields but with some special treats like image preview and upload inline.
If you have a collection of fields for addresses:
public function fields()
{
    return [
        Text::make('Address'),
        Text::make('City'),
        Text::make('State'),
        Text::make('Zip'),
        Country::make('Country'),
    ];
}
You can replace the Address
field with one of type Place
and it will hook into an Algolia address auto-completing service that will let you pick the right address and fill in all the other address fields automatically once you pick it.
public function fields()
{
    return [
        Place::make('Address'),
        Text::make('City'),
        Text::make('State'),
        Text::make('Zip'),
        Country::make('Country'),
    ];
}
If you have fields that store one way but should display another way, you can format its output in a Closure:
Text::make('Size', 'size', function ($value) {
    return number_format($value / 1024, 2) . 'kb';
});
In this example Taylor gave, he's storing the file size as bytes but wants to display it as kilobytes.
Fields can also define their own validation rules for update, create, or both.
public function fields()
{
    return [
        Text::make('Name')
            ->rules(['required'])
            ->creationRules(['other rules here'])
            ->updateRules(['other rules here']),
    ];
}
These validation rules can use any of the validation you're used to in Laravel--both those that come out of the box and also your own custom rule objects and closures.
You can set fields to only show up on edit/create forms but not lists with onlyOnForms()
; you can run hideFromIndex()
to hide them from lists; and any field can be hasMany()
to allow you to use a multiselect to relate it to a group of other fields.
You can add sortable()
to allow this field to be sorted on list pages.
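A hedged sketch of those visibility helpers chained onto fields (the field names here are hypothetical; the method names are as I heard them in the talk):

```php
// Sketch -- chaining the visibility and sorting helpers described above:
public function fields()
{
    return [
        Text::make('Internal Notes')->hideFromIndex()->sortable(),
        Textarea::make('Biography')->onlyOnForms(), // hypothetical fields
    ];
}
```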
You can hook onto various actions like Delete and Store using closures or classes:
Image::make('Photo')
    ->store(function () {})
    ->delete(new DeleteImage);
Any fields that end up showing a dropdown (e.g. most relationship fields) can get long and unwieldy as dropdowns once you have a lot of entries. You can chain on searchable()
and you get a slick autocomplete search interface.
Actions and Filters apply to a resource. Filters are things like "just show me the published posts"; actions are the things like "delete all selected posts".
Actions are PHP classes that perform a given task on a collection of items. Each defined action needs to be able to take a collection--even if it's just a collection of one--and act on it in its handle()
method.
To register actions, add an actions()
method on your resource, and return your actions in there:
class PostResource
{
    // ...

    public function actions()
    {
        return [
            new Actions\Publish,
        ];
    }
}
These actions will be options you can apply "to all checked" on a list page or "to this item" on the detail page.
You can also mark your actions as ShouldQueue
, and Nova will track the progress of those queued actions in the interface and show you when they complete. Here's a sample action:
class DoStuff extends Action implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public function handle(ActionFields $fields, Collection $models)
    {
        foreach ($models as $model) {
            // do stuff to the model
        }
    }
}
Here's what it looks like to trigger an action:
If you want the action to look scary and have red buttons, have the class extend DestructiveAction
instead of Action
.
If you've made your resource auditable by adding the Actionable
trait, you'll get an actions audit panel on its detail page, and that's where it shows state of queued actions.
Here's what it looks like when a ShouldQueue
action is still running:
Here's how to generate a new action:
php artisan nova:action ActionName
I'm not 100% sure how this works, but my best guess is that you define a fields()
array in the action's class and when someone runs that action, they get a popup and have to enter those fields?
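If that guess is right, an action with its own fields might look something like this (entirely speculative; the Select field and option values are placeholders):

```php
// Purely speculative sketch of an action that prompts for fields:
class PublishPost extends Action
{
    public function fields()
    {
        return [
            Select::make('Visibility')->options([ // hypothetical field
                'public' => 'Public',
                'private' => 'Private',
            ]),
        ];
    }
}
```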
Filters are similar to actions; they'll show up in a dropdown on the index page and let you show "only items that match this filter".
You add filters the same way you add actions:
class PostResource
{
    // ...

    public function filters()
    {
        return [
            new Filters\PublishedPosts,
        ];
    }
}
I remember that each filter has a method of some sort that will get a query builder instance and can modify it. Something like this, maybe?
class PublishedPosts extends Filter
{
    public function options()
    {
        // Can't remember if this is exactly the right shape, but something like this:
        return [
            'Published' => 'published',
            'Un-Published' => 'unpublished',
        ];
    }

    public function apply(Request $request, $query, $value)
    {
        if ($value == 'published') {
            return $query->whereNotNull('published');
        }

        return $query;
    }
}
Here's what it looks like to apply a filter:
And here's how to generate a new filter:
php artisan nova:filter FilterName
Lenses are a more radical view of a resource. Rather than just modifying its fields, lenses allow you to build an all-new view, with your own subset of query parameters and selects and joins and custom fields to make it exactly the way you want to look at that resource.
A lens is a subsection of a resource; imagine having a Users page and wanting to have a page where you just look at your paying users, with custom tally fields based on their monthly revenue.
Something like this, which I copied from Taylor's Medium post:
class MostValuableUsers extends Lens
{
    public static function query(LensRequest $request, $query)
    {
        return $request->withOrdering($request->withFilters(
            $query->select('users.id', 'users.name', DB::raw('sum(licenses.price) as revenue'))
                ->join('licenses', 'users.id', '=', 'licenses.user_id')
                ->orderBy('revenue', 'desc')
                ->groupBy('users.id', 'users.name')
        ));
    }
}
You can also have fields()
and filters()
and actions()
methods on your Lens class, just like on resources.
Here's how you visit a lens:
And here's what it might look like:
All Resources can be searched from their list pages. You can customize which fields are searchable by customizing the $search property on the resource (public static $search = [searchable fields]); by default, Nova uses basic Eloquent LIKE-style searching.
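A sketch of that property (the field list here is hypothetical):

```php
// On a Nova Resource -- field names are placeholders:
public static $search = [
    'id', 'name', 'email',
];
```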
If your model is Scout-backed (meaning its entries are indexed in Algolia or something like it), Nova will read the Searchable
trait and now use Scout for all your searches instead of Eloquent.
You can set the globallySearchable
property to true either on your base Nova Resource or just on individual Resources, and Nova will show a global search box up in the top nav bar. Type in there and you'll get results across all of the globally searchable Resources, grouped by their Resource type.
When you have many related items on a detail page (e.g. post has many comments), they get their own little panel and it's a small version of the list for that item. It lists all comments for this user, and when you use that panel's search box, it keeps that search scoped to just that user.
To track a list of the changes made to any resource through Nova, add the Actionable
trait to the user (resource? think it's user).
Nova has a robust and granular ACL/Authorization scheme. First, Policies for a given model will be automatically read and registered as the access control rules for its connected resource. Nova both updates the UI according to someone's access permissions, and protects the backend and routes from any nefarious attempts to make not-authorized changes.
Nova respects the usual policies, but there are also new conventions you can use as methods on your policies: "addRelatedModelName" (e.g. "addComment"), "attachRelatedModelName" (e.g. "attachRole"), or "attachAnyRelatedModelName" (e.g. "attachAnyRole"). addComment
is for hasMany; attachRole
is for many to many where you might be willing to attach some roles but not others; and attachAnyRole
is where you want to approve or deny the entire ability to attach roles.
There's also a method named canSee
that can be defined in quite a few places. I don't have a full handle on all the places you can use it, but I know you can chain it after actions and filters and likely anything else you register in the resource:
class PostResource
{
    public function actions()
    {
        return [
            (new Actions\DeletePost)
                ->canSee(function () { return request()->user()->isAdmin; }),
        ];
    }
}
You can also add canRun()
methods to actions; rather than defining whether users can see the entire action, these let you define whether they can run it in a specific context--good if the user should see the action but only be able to perform it on a subset of the items.
You can also use $this->when()
to wrap around items in a resource's fields()
list to make a conditional show; you can make it conditional based on ACL or really anything else.
public function fields()
{
    return [
        $this->when('some boolean here i think', function () {
            return [
                'field definitions here that only run if this when is true',
            ];
        }),
    ];
}
There are three types of metric: value
, trend
, and partition
.
To make a value metric:
php artisan nova:value ValueName
Metrics show up as cards on your resource dashboard; you'll attach them in the cards()
method for that resource. But I think you can also attach them to resource lists or even resource detail pages.
A metric has a defining class with a calculate()
method. You'll get passed the request and you can define how to count the metric at any given point in time (e.g. count all the users that existed at this point).
For a value metric, it'll then generate a big number--how many users signed up in the last 30 days, for example. As you can see, this count()
method takes parameters passed via the API (related to which time period you've selected, for example) and then counts the number of entries in the given model that match for that request's parameters.
class TotalUsers extends Metric
{
    public function calculate($request)
    {
        return $this->count($request, User::class);
    }
}
You can also define the possible time ranges for it to calculate, and that shows up in a dropdown:
class TotalUsers extends Value
{
    public function ranges()
    {
        return [
            30 => '30 days',
            // etc.
        ];
    }
}
You can see how these ranges impact the view here:
You can also make a trend metric:
php artisan nova:trend NewUsers
Trends aren't about "how many users in the last month" and instead "give me a line graph, per day, over the last month."
class NewUsers extends Trend
{
    public function calculate($request)
    {
        return $this->countByDays($request, User::class);

        // or:
        return $this->sumByDays($request, License::class, 'price')->dollars();
    }
}
You can define how long to cache these lookups, since they could be computationally heavy:
public function cacheFor()
{
    return now()->addMinutes(5);
}
And for the third type, a partition metric (the pie graph), you modify your calculate method:

public function calculate($request)
{
    return $this->count($request, User::class, 'active')->label(function ($label) {
        // switch on the value and return a nice label for each
    });
}
Cards are the individual boxes, like those the Metrics show in. But you can make other cards, and register them. Taylor didn't go into much detail here but he said you could create them; I would bet, like all other custom tools, you'll have something like php artisan nova:card CardName
and it will make a Vue file and a Controller for you to use to serve that card.
Then you'll register those cards--and your metric cards--in the Resource, in its cards()
method. You can modify those registrations to make them wider with the width()
method.
public function cards()
{
    return [
        (new MetricThingOrWhatever)->width('2/3'),
        new OtherMetricThing,
    ];
}
Nova understands and honors soft deletes.
If a model is using soft deletes, you'll get a new set of tools. The delete action will now also have a "force delete" action next to it. You will get a new filter that adds "with trashed" and "only trashed", and when you're looking at a trashed item, the trash can turns into a "restore" button that undeletes it.
There is a lot of cool stuff you can do to customize how Nova handles many-to-many relationships, including defining which pivot fields should be customizable when users attach records. When the user attaches a record and Nova expects a custom pivot field (e.g. "notes") it will pop up a modal asking for that field as soon as you make that attachment.
Nova also handles polymorphic relationships beautifully; to add a new polymorphic comment, Taylor showed the "new comment" field asking you first which type of commentable you'd like to comment on, and then once you picked "Video" it gave you a list of videos you can comment on.
The four types of customizable tools are sidebar tools (often just called "tools"), resource tools, fields, and cards.
You generate custom tools using an Artisan command of some sort. Each time you generate a custom tool, it will create a folder for that tool in the nova-components
folder.
Each sidebar tool you create adds a new entry to the left nav, and gets its own entire page for you to work with.
There will be a new Tool.vue
file that represents that tool's view, and I assume a controller as well to provide it data.
You'll register this using the tools()
method in the Nova Service Provider.
A resource tool is a custom panel attached to a resource. Imagine wanting to show payment history for a user or some sort of complicated sentiment analysis based on their last four customer service interactions. Just like sidebar tools, you'll get a Tool.vue
that you can customize to your heart's content.
You'll register resource tools by adding them to the tools()
method in the Resource class. You can even customize them per resource; Taylor gave the example of a showRefunds
method that would let you customize your StripeInspector
resource tool depending on which resource imported it:
class User extends Resource
{
    // ...

    public function tools()
    {
        return [
            StripeInspector::make()->showRefunds(true),
        ];
    }
}
I believe that showRefunds()
method is magical, and will be passed down to your Vue component as field.showRefunds
.
Cards can be placed on the dashboard (I assume by adding them to the cards()
method on the Nova Service Provider) or on the list page or detail page for a resource (I think list page would be the cards()
method in the resource file; not sure how you add it to the detail page.)
Just like the other tools, you'll get a custom file; I believe this will be Card.vue
and you'll be able to define the contents and behavior of the card there.
When you generate a custom field, you'll get three Vue components: one for showing that field in a list ("index"), one for showing it on a detail page ("detail"), and one for creating/editing it ("form").
There are also methods you can customize (in a PHP class for it, I think) that allow you to set default values; there are also hooks of some sort for handling what to do when they update the data and other special events.
Nova will be $99 for companies making less than $20,000 per year and $199 for companies making more.
There are keyboard shortcuts, so if you're viewing a resource and type e
, you just start editing it.
If a user tries to save something that was modified after they opened it, they'll be blocked (so they don't overwrite anything someone else does).
Nova has your usual checkbox in the corner of the list page saying "Select all", but it also has a clever second one named "Select all matching". That way, if you've done a search and you want to take action on every item that matches that search, you can do so even if those items span across more than one page.
Nova stores all your dates and times according to the server time, but it converts them to your local time zone (either based on your browser or, if you configure it this way, based on a stored per-user time zone) when displaying them. And when you edit those dates and times, you edit them in your own local time zone and Nova converts them back before it saves.
Simple user interface elements like the "subtitle" in search are customizable, and you can reference related items. Taylor gave the example of wanting a book to have its author name in the search subtitle; he set the $with
property on that resource to be an array with ['user']
as its contents to eager load the user, and then set the subtitle using something like this: $subtitle = 'Author: ' . $this->user->name;
.
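Pulling that together, the book resource might look something like this (whether the subtitle is a property or a method wasn't clear to me, so this is just one plausible shape):

```php
// Sketch only -- the exact subtitle API is my assumption:
public static $with = ['user']; // eager load the author

public function subtitle()
{
    return 'Author: ' . $this->user->name;
}
```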
Check back soon! I'll update this as soon as I learn more, get my battery charged, and get my brain functioning again! I plan to write a bunch more soon about custom sidebar tools, custom resource tools, custom cards, and custom fields.
OH YES one more thing: we're building a web site to help you share your custom sidebar tools, custom resource tools, custom cards, and custom fields, and more:
Coming soon. I promise. It's gonna be great.
Here's a quick walkthrough of how to set up a MySQL 5.7 testing database, locally in Codeship and without needing to rely on RDS.
Quick note: Codeship makes these environment variables available for you to use:
MYSQL_USER
MYSQL_PASSWORD
They also claimed to have a MYSQL_PORT
variable, but I tried and found it didn't work. We'll be referencing these variables later.
First, we'll need to run a script to "install" MySQL 5.7 on our testing instance; we'll then add a test
database that our scripts will connect to.
Add the following lines to your setup script, somewhere before your migrations:
\curl -sSL https://raw.githubusercontent.com/codeship/scripts/master/packages/mysql-5.7.sh | bash -s
export PATH=/home/rof/mysql-5.7.17/bin:$PATH
mysql --defaults-file="/home/rof/mysql-5.7.17/my.cnf" -u "${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -e "create database test";
You'll now have a local instance of MySQL version 5.7 with a test
database, ready for your app to connect to.
This will depend a bit on the framework or language you're working with. You can learn a little about how Codeship handles environment variables with Ruby on Rails and Django here: https://documentation.codeship.com/basic/databases/mysql/
But if you're working with your own environment variables, like I do in Laravel, here's how to get your variables to reference theirs.
First, edit the Project Settings (in the same place you were to edit your setup script) and choose the "Environment" tab instead, which will leave you at https://app.codeship.com/projects/YOURPROJECTNUMBER/environment/edit
For all of your environment variables on the left (in my case, things like DB_HOST
and DB_PORT
) you'll map them to either a static variable or one of the environment variables Codeship makes available (like $MYSQL_USER
). You can see the values you'll want to use below:
As you can see, we need to connect to 127.0.0.1:3307
with user $MYSQL_USER
(the dollar sign tells Codeship to pull the pre-existing environment variable with that name) and password $MYSQL_PASSWORD
, and we'll be using the test
database we created in our setup script.
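For a Laravel app, the mapping described above amounts to something like this in the Codeship environment settings (the left-hand names are Laravel's standard database variables; the values are the ones discussed above):

```
DB_HOST=127.0.0.1
DB_PORT=3307
DB_DATABASE=test
DB_USERNAME=$MYSQL_USER
DB_PASSWORD=$MYSQL_PASSWORD
```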
That should do it! Restart your build and hopefully you're good to go. If you have any trouble, you can either hit me up on Twitter or email Codeship support.
I've spent the last few months trying to gather information about what makes people say this, and also stories of folks who are already successfully working with Laravel in enterprise context. I put out a survey to collect stories, and wrote and delivered a talk (which I hope to give again soon) titled "Laravel and the Enterprise."
As a part of my goal of helping dispel this myth that Laravel is bad for enterprise work, I wanted to hear not just from success stories, but also from people who were having trouble. I put up a "Help me!" form and promised that, whenever I can find time, I'll respond to those messages and help folks with any challenges they're running into.
Some of these questions can be very practical, and I can respond with some code or a package, or even by working with the submitter to make a pull request to Laravel core or the Laravel docs.
But some of these questions are architectural, and for those I can almost never make a great recommendation without knowing more than I can learn from a "Help me!" form submission.
So, instead, I just give the best advice I can based on the information I have in front of me. Remember—this is advice to this person based on the information I know about them, and if this advice doesn't work for your project, then it's not for you.
Here's our first submission: an anonymous Laravel-based project that allows their tenants to establish their own white-labeled storefronts selling a few specific pieces of clothing. We have scale, taking money, regulation and reporting, dev/ops complexity, pain-if-site-goes-down, and quite a few more of my "characteristics of enterprise projects", so while you might not think of all e-commerce as "enterprise", I think there's at least enough here for it to be interesting.
Without further ado, let's go!
Note: I've modified the original request to try to remove any identifying information, and also fixed any typos.
FROM: [Redacted]
TO: Matt Stauffer
SUBJECT: Help Me!
This project was all about creating endpoints initially, and then we added in checkout/ecommerce, queues for creating orders, sending emails, etc. We extracted an internal package as the codebase for checkout as a separate Laravel app.
There are a few problems:
Despite caching, we find that we have to up our servers when we are about to launch a new tenant as otherwise our app cannot handle all the traffic. I am not sure on how to optimise the app. We have already done the basic stuff (caching, indexing, etc.)
The code is getting a bit messy and it's harder to keep track of what's happening—not sure if we need diagrams, etc. but if we do, how do we maintain them, where/how do we create them in the first place?
Sometimes because of rushing I have ended up adding code just to get the job done instead of thinking about it.
Initially it was simple and CRUD-ish but evolved to be more of a monolith... and we were thinking of moving to microservices, but that has its own cons... too much faff between the API Gateway and the microservices' APIs... and versioning, which means another layer of complexity.
Maybe DDD would be better? But we tried that using a module and it also doesn't feel right--e.g. too many extra folders being created (for events, listeners, etc.) which we might not use.
END OF MESSAGE
Have you taken a look at what’s actually slowing things down?
If it's the database, you may find that certain behaviors (writes vs. reads, commerce vs. basic page content) should be split out to another database. Or maybe you move the entire thing to a service like RDS. You can also be more selective about your caching, possibly putting most of your reads into an in-memory cache so only write operations have to directly hit the database. You may even want to look into full-page cache, serving just static HTML to the majority of your visitors on the majority of your pages.
If it’s the application server (memory or processor), do you have any complex processes you can identify that are either optimizable or moveable to a queue or a microservice?
If it’s access to an external service, consider throwing a cache or a microservice in front of it.
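As a tiny sketch of the "throw a cache in front of it" option (the client class and cache key are hypothetical; note the TTL argument was minutes in Laravel 5.x and became seconds later):

```php
// Hypothetical: wrap a slow third-party call in a cache layer.
$stats = Cache::remember('partner-stats', 10, function () {
    return app(PartnerApiClient::class)->fetchStats(); // hypothetical client class
});
```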
You may want to start with a simple Markdown document sharing what your conventions are for what goes where. This could also have a sort of mind map in outline form of the bigger pieces of code (e.g. models vs. controllers vs. service classes) and any meaningful organization or conventions or groups within each big section.
If you want to go bigger than that, you may want to generate database and UX diagrams. If so, use whatever tool is simplest (the Omni tools are great but pricy) and then store it in whatever your preferred documentation tool is. I’ve seen folks use GitHub, wikis, Basecamp, and more.
Sometimes messy code doesn’t need more documentation but instead refactored code. The more your code is comprised of small, simple classes and methods and functions, the less likely it is to be hard to follow—especially with someone whose IDE lets them click on a method or class and navigate straight to its definition.
Spiking is fine. The question is, do you ever come back and refactor it later?
We all have to write code quickly sometimes. But if you can’t convince whoever determines your schedule that it’s worth refactoring and testing and cleaning up that spiked code, you’re just asking for the whole thing to melt down in six months. Learn how to convince your boss now that addressing technical debt early and frequently is core to the business’s success.
Don’t go for microservices as a massive refactor. Rather, understand what they provide and what they don’t; and when you’re in a situation that might merit a microservice, ask yourself the question, are the benefits to be gained here worth the potential costs?
For example: You have an external API that you’re calling, and their API is awkward and slow. Sometimes you can address this with a clean API client you write in PHP and some caching. But sometimes it’s not just that; it’s also that the timing of their data availability and their rate limits will make it impossible for you to get the data when you need it, even with a clean API client and caching. Then? Microservice time. But don’t jump straight to microservices. You’re right; there are a lot of costs that come along with them. Don’t reach for them just because they’re “better”.
Here’s what you should learn from DDD: Use language in your code that mimics the language your business/product people use. If you have a relationship on a model, for example, and the model it’s related to is the “User”, but in your business’s brain that relationship is to the “Trainer”, then name it trainer.
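That naming advice can be sketched like so. This is plain JavaScript for brevity rather than an Eloquent relationship, and the class and property names are hypothetical:

```javascript
// The underlying record is a User, but the business calls this
// relationship the "trainer" -- so the code calls it that too.
class User {
  constructor(name) {
    this.name = name;
  }
}

class TrainingSession {
  constructor(trainerUser) {
    this.trainerUser = trainerUser;
  }

  // Named after the business term, not the underlying model.
  get trainer() {
    return this.trainerUser;
  }
}
```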
None of the tooling folks have proposed in the DDD space is worth your time, and few of the conventions are either. The concepts are often (but not always) good, but those tools don’t make DDD. And neither do those conventions.
Convention-wise, my advice is to always stick with the Laravel conventions unless they cause you pain. Then find the best tool to meet the pain you are feeling at that moment.
I hope this helped you out! Are you looking for advice like this on your enterprise Laravel application? Get in touch!
For what it's worth, I'm not a big fan of LOC as a measure of any importance, but it can at least give us a rough foundation for talking about broad differences in project size. If you were to ask me, I would say we shouldn't even think about it. But we don't always have that luxury.
If you don't want to read a half dozen options, use PHPLOC. You can find a longer description below, but here's the quick start guide:
cd Sites/
wget https://phar.phpunit.de/phploc.phar
php phploc.phar --exclude vendor --exclude node_modules myprojectnamehere/
Grab the Non-Comment Lines of Code and Logical Lines of Code numbers; they'll be your most useful comparisons across projects.
Note that you can also exclude framework-specific cache and log directories and whatever else helps you get the best number.
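As the sample PHPLOC report further down shows, NCLOC is simply every line that isn't a comment (LOC minus CLOC). Here's a toy sketch of how those three numbers relate, in JavaScript for illustration and only handling //-style comments:

```javascript
// Toy line counter: LOC is every line, CLOC is comment lines,
// NCLOC is everything else. Real tools also handle /* */ blocks,
// blank lines, and per-language syntax; this is just the idea.
function countLines(source) {
  const lines = source.split('\n');
  const cloc = lines.filter((line) => line.trim().startsWith('//')).length;
  return { loc: lines.length, cloc, ncloc: lines.length - cloc };
}
```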
OK, let's look into all of the options. Please note that some of these tools count all lines of code, not just PHP. When possible, I've passed filters to them to just count PHP files.
PHPLOC is a project from Sebastian Bergmann, the creator of PHPUnit, which gives simple and easy LOC counts contextualized for PHP.
PHPLOC also gives other PHP-specific metrics, like cyclomatic complexity, number of classes, average class length, average method length, and more.
Here's a sample PHPLOC report:
$ phploc src
phploc 4.0.0 by Sebastian Bergmann.
Directories 3
Files 10
Size
Lines of Code (LOC) 1882
Comment Lines of Code (CLOC) 255 (13.55%)
Non-Comment Lines of Code (NCLOC) 1627 (86.45%)
Logical Lines of Code (LLOC) 377 (20.03%)
Classes 351 (93.10%)
Average Class Length 35
Minimum Class Length 0
Maximum Class Length 172
Average Method Length 2
Minimum Method Length 1
Maximum Method Length 117
Functions 0 (0.00%)
Average Function Length 0
Not in classes or functions 26 (6.90%)
Cyclomatic Complexity
Average Complexity per LLOC 0.49
Average Complexity per Class 19.60
Minimum Class Complexity 1.00
Maximum Class Complexity 139.00
Average Complexity per Method 2.43
Minimum Method Complexity 1.00
Maximum Method Complexity 96.00
Dependencies
Global Accesses 0
Global Constants 0 (0.00%)
Global Variables 0 (0.00%)
Super-Global Variables 0 (0.00%)
Attribute Accesses 85
Non-Static 85 (100.00%)
Static 0 (0.00%)
Method Calls 280
Non-Static 276 (98.57%)
Static 4 (1.43%)
Structure
Namespaces 3
Interfaces 1
Traits 0
Classes 9
Abstract Classes 0 (0.00%)
Concrete Classes 9 (100.00%)
Methods 130
Scope
Non-Static Methods 130 (100.00%)
Static Methods 0 (0.00%)
Visibility
Public Methods 103 (79.23%)
Non-Public Methods 27 (20.77%)
Functions 0
Named Functions 0 (0.00%)
Anonymous Functions 0 (0.00%)
Constants 0
Global Constants 0 (0.00%)
Class Constants 0 (0.00%)
You can require it globally or per-project with Composer, or, my preferred method, just download the .phar, run it, then delete it once you're done.
Here's what I used:
php phploc.phar --exclude vendor --exclude node_modules myproject
CLOC is one of the longest-running and smartest programs for counting lines of code. It can differentiate between languages and also separate empty lines and comment lines from real lines of code.
It can also pull from archives and git repositories, diff two versions of a codebase, pull from specific commits, ignore files and folders matching specific patterns, and it's installable via Brew, NPM, two Windows package managers, and all the major Linux package managers.
Here's an example:
prompt> cloc gcc-5.2.0/gcc/c
16 text files.
15 unique files.
3 files ignored.
https://github.com/AlDanial/cloc v 1.65 T=0.23 s (57.1 files/s, 188914.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
C 10 4680 6621 30812
C/C++ Header 3 99 286 496
-------------------------------------------------------------------------------
SUM: 13 4779 6907 31308
-------------------------------------------------------------------------------
Because CLOC is language-agnostic, it's not going to provide the same quality or diversity of metrics as PHPLOC.
Here's what I used:
cloc --exclude-dir=vendor,node_modules myproject
I found a plugin for PHPStorm called Statistic that gives you the total number of lines of code across your whole project and broken down by file type.
I found this Gist, which harnesses the regex capabilities of Sublime Text search, and makes it easy to specify which file types and folders you want to include or exclude.
I used this version (from a comment, which ignores white space lines):
^.*\S+.*$
Make sure to exclude the right directories. Here's my list for a generic PHP project:
-./vendor/*,-./node_modules/*,-./.git/*,*.php
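In case the regex is opaque: ^.*\S+.*$ only matches lines containing at least one non-whitespace character, so blank and whitespace-only lines don't get counted. A quick JavaScript illustration of the same pattern:

```javascript
// Count only the lines that contain at least one non-whitespace
// character -- the same thing the Sublime Text search regex does.
const countNonBlankLines = (text) =>
  text.split('\n').filter((line) => /^.*\S+.*$/.test(line)).length;
```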
The find command
This is definitely one of the less precise measures, but it also doesn't require you to have anything else installed, and it gives you the ability to include and exclude specific patterns for files and folders.
find . -type f ! -path './vendor/*' ! -path './node_modules/*' ! -path './.git/*' ! -name '*.log' -name '*.php' | xargs wc -l
As you can see, we're excluding the two vendor directories, the git directory, and then you can also see an example of how to exclude and include specific file patterns.
Thanks to Jake Bathman at Tighten for helping me get this command working correctly.
The Silver Searcher (ag)
If you have Silver Searcher installed, you can try this:
ag -l --php --ignore-dir=vendor --ignore-dir=node_modules --ignore-dir=public --ignore-dir=storage | xargs wc -l
Thanks to Daniel Coulbourne at Tighten for this one.
I ran these tools all on the same project: Symposium, one of Tighten's open source projects, to see how they all compare.
wget https://phar.phpunit.de/phploc.phar
php phploc.phar --exclude vendor --exclude node_modules symposium/
Result:
brew install cloc
cloc --exclude-dir=vendor,node_modules symposium/
Results:
Jose Soto ran these for me.
Results:
Note: In order to get this plugin to give me good results, Jose had to delete the vendor/ and node_modules/ directories.
With regex enabled, find ^.*\S+.*$
in:
-./vendor/*,-./node_modules/*,-./.git/*,*.php
And then look at the results at the bottom.
Results:
The find command
cd symposium
find . -type f ! -path './vendor/*' ! -path './node_modules/*' ! -path './.git/*' ! -name '*.log' -name '*.php' | xargs wc -l
The Silver Searcher (ag)
cd symposium
ag -l --php --ignore-dir=vendor --ignore-dir=node_modules | xargs wc -l
Results:
Wow. This post took way longer than I expected. Kudos to you for reading this long. Geez. I am tired.
In the end, I'd still recommend PHPLOC if you can. It is the most contextualized and provides additional details several others don't. It makes it easy to exclude vendor directories. It's good. That's all.
Frequently, one or more of our developers will be tasked to work with the same client for months. Every day they wake up, open up Slack--which is the primary tool Tighten, as a remote company, uses to build culture and relationships--and switch to the client's Slack.
We've noticed that those folks whose client has their own Slack have less of a chance to participate in Tighten conversations and events. So, I set out to find a way to make it possible to have two local apps for Slack.
The best solution--which is not possible, as far as I can tell--is to have two versions of the official Slack running locally with a unique list of workspaces open in each. The app is great, it's standalone, and it has some niceties that aren't present using Slack in the browser.
But even if you try to force Slack to open multiple instances, it'll just collect them together. No luck there.
Obviously, the simplest option is to use the left panel switcher that the Slack app allows for:
However, when you're "in" one Slack workspace, all the rest can sort of disappear by the wayside. We want something that keeps our Slack more present.
Our devs could, of course, open Tighten's slack in their browser. But even with pinned tabs, browser windows still sort of ebb and flow; an individual item in a browser doesn't get its own cmd-tab; and the browser doesn't get quite the same quality of some of the keyboard shortcuts and other system integrations.
All-in-all, Slack in a browser window is fine, but a second-class citizen.
I didn't mention this in the original version of this post because I consider it helpful but separate, but enough people mentioned it that I figured I would add it. Recently Slack added a brilliant feature called shared channels that allows you to sync a channel between your Slack and another workspace.
If you can handle your communications with the other workspace within one or a few channels, and you have a relationship set up such that shared channels will work, that's absolutely the best way to go about it. You can avoid the slow-down of multiple workspaces but still get the benefits of collaboration.
The remaining options--and the less-desirable options above--assume you're in a context where that's not an option.
This tip is from Tightenite Dave Hicking:
You can duplicate the Slack application file (using Finder) on your Mac and rename the second version, and then you'll just have two instances that you can open side-by-side.
Pro: You get the full power of desktop Slack on both.
Con: If you have more than one workspace, you're now spinning up two instances of a local Slack instance with multiple workspaces. Slack uses up a lot of memory, and two full local Slack instances connected to multiple workspaces each will really amplify that. Also, every notification will be duplicated across all of your workspaces.
That leaves us with the other best option: single site browsers, or SSBs. An SSB is a desktop app that wraps a web site in its own process and often a simpler browser chrome. SSBs have dropped in popularity over the last few years, but they're still possible. The best tool for creating SSBs on Mac is called Fluid.
When you use Fluid, you point it at a specific web site, and it will generate an SSB for that web site. That means you use Fluid once to generate the SSB, which is a Desktop app that has its own icon and its own process. You then forget about Fluid, and take the generated SSB and place it anywhere on the desktop or the dock. You can now open or close it independently of your browsers, cmd-tab to it as its own entity, and it will generally act as its own completely independent application--even though it's just Webkit.
Pro: You can have a desktop app devoted to just the one Slack workspace you want to run separately from the rest, which means it consumes less memory than a full duplicate of the desktop app.
Con: Because it's browser-based, instead of the true Electron Slack app, it's not quite as perfectly integrated with the desktop. For example, CMD-T in the desktop app is the same as CMD-K. But CMD-T in an SSB version opens a new tab in the SSB. Also, every notification will be duplicated in the one workspace you have open in your SSB (assuming you also have it open in your desktop Slack app.)
Step 1. Download the free Fluid app.
Step 2. Open the app.
Step 3. Enter your workspace's URL, and the title.
Step 4. Create it.
Step 5. Open your new Slack app--right next to your actual Slack app. Boom. Done.
Any other tips or tricks? Hit me up on Twitter.
There are a few tasks that are still pretty tough with static sites—for example, search, and submitting forms (which we're trying to fix with FieldGoal). But there are other tasks that are tough-but-possible, and key among them are RSS and sitemaps.
Let's start with sitemaps. Our lead developer on Jigsaw, Keith Damiani, added a feature recently that allows you to add lifecycle hooks to your Jigsaw sites, and he even wrote up how to use those hooks to generate a sitemap. I wanted to try it out, so I did, and I extracted his instructions to this very simple post.
For such a powerful concept, I expected this to be a lot of work. Turns out it took less than fifteen minutes. Let's walk through it step by step:
Install the sitemap package
First, we're going to rely on an external package for actually generating the sitemap. Let's pull in samdark/sitemap:
composer require samdark/sitemap
Next, we're going to take advantage of the hooks Keith introduced. The new hook system has three events you can listen for: beforeBuild, afterCollections, and afterBuild. Hopefully I can get him to write them up in more detail in a blog post, but for now if you're interested in learning more you can take a look at the pull request.
We'll be using afterBuild, which allows our system to have access to the full list of the output files when it goes to generate the sitemap. Since the listener we're building is a bit complex, we'll pull this functionality out to a class in a dedicated Listeners directory.
Create the GenerateSitemap class
Let's start by creating our listener. Make a new directory called Listeners, and create a new file in it named GenerateSitemap.php. Paste in the following:
<?php namespace App\Listeners;
use TightenCo\Jigsaw\Jigsaw;
use samdark\sitemap\Sitemap;
class GenerateSitemap
{
public function handle(Jigsaw $jigsaw)
{
$baseUrl = $jigsaw->getConfig('baseUrl');
$sitemap = new Sitemap($jigsaw->getDestinationPath() . '/sitemap.xml');
collect($jigsaw->getOutputPaths())->each(function ($path) use ($baseUrl, $sitemap) {
if (! $this->isAsset($path)) {
$sitemap->addItem($baseUrl . $path, time(), Sitemap::DAILY);
}
});
$sitemap->write();
}
public function isAsset($path)
{
return starts_with($path, '/assets');
}
}
Let's read through this file. First, we're using the $jigsaw object to pull information out of Jigsaw, including our baseUrl from the config.
Next, we're creating an instance of our Sitemap dependency. Its constructor wants us to pass the path the file should be built to, so we're just putting it in build_{environment}/sitemap.xml.
Next, we work through all the files that are being output (which we get from $jigsaw's getOutputPaths()) and add every file to the sitemap unless it lives in the assets directory.
Finally, we rely on the Sitemap package to write the file. Done! ... almost.
Autoload the Listeners directory
Let's get this new directory into our actual app using PSR-4 autoloading. Modify your composer.json to add a PSR-4 autoloader; it'll look something like this when you're done:
{
"require": {
"tightenco/jigsaw": "^1.2",
"samdark/sitemap": "^2.2"
},
"autoload": {
"psr-4": {
"App\\Listeners\\": "Listeners"
}
}
}
Now just run composer dump-autoload on the command line, and it's loaded up.
Register the GenerateSitemap class in bootstrap.php
Finally, let's register the afterBuild event listener in bootstrap.php:
$events->afterBuild(App\Listeners\GenerateSitemap::class);
Now, as the last step of every Jigsaw build, our GenerateSitemap class will be invoked and it'll generate our new sitemap.
That's it! On your next build, you'll see sitemap.xml sitting in your build directory. Boom. Just that easy.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
<url>
<loc>
https://mattstauffer.com/why-i-love-jigsaw
</loc>
<lastmod>2017-05-23T00:00:00+00:00</lastmod>
<changefreq>weekly</changefreq>
<priority>0.6</priority>
</url>
</urlset>
I used to write Vue.js. In 2015 I did a series on Twitch/YouTube where I learned Vue "out loud", sharing my (often-painful) process learning Vue using the very minimal amount of material that was available at that point.
I've written some Vue since 2015, but I've also learned some React, written a lot of Laravel, run a company, and spent much of my free time writing a book about Laravel.
It's time for me to get back into Vue.js and really spend some time to get good at it. Thankfully, some of the best Vue developers out there work at Tighten, so I'm putting them to work to level me up.
So, I'm going to be writing new Vue code and also cleaning up some of my Vue from 2015, and I wanted to share the process with you, my lovely readers.
This is a little internal tool I'm building named PostIt. It helps me and the rest of the team remember to submit all of our new content (blog posts, etc.) to any content aggregators ("targets") that don't have APIs available.
I built the original proof-of-concept on a plane in about two hours, with no access to the Vue documentation or sample apps to look at--so I couldn't cheat and look anything up--which makes it the perfect opportunity for a refactor. I left a lot of @todos lying around the code.
It's a lot. Here's basically what happens in the code below:
DashboardController pulls in all the "targets" (right now, that's just Laravel News). It also pulls in all of the posts we've had recently, grouped by source (e.g. Tighten Blog, Mattstauffer.com).
dashboard.blade.php JSON-encodes those and passes them to Posts.vue.
Posts.vue starts a table and iterates over the sources. For each it shows a header and then iterates over the posts for that source, passing each into Post.vue.
Post.vue shows a line for each post, and then a checkbox for every target. That checkbox reflects whether or not there's already a "submission" record for that post--basically, whether or not that post should be checked off for that target.
Note: this is a lot of code, and it was a bit hard to read, so I removed all the styles to make it a bit cleaner.
// For non-Laravel developers, this just passes a list of all the targets as $targets
// and a list of all the sources as $sources, each with a $posts relationship filled,
// and each $post with a $submissions relationship filled
return view('dashboard')
->with('targets', Target::all())
->with('sources', Source::with(['posts', 'posts.submissions'])->get());
<Posts :targets="{{ json_encode($targets) }}" :sources="{{ json_encode($sources) }}"/>
require('./bootstrap'); // Laravel bootstrap code
window.Vue = require('vue');
Vue.component('Post', require('./components/Post.vue'));
Vue.component('Posts', require('./components/Posts.vue'));
const app = new Vue({
el: '#app'
});
<template>
<table>
<tbody v-for="source in sources" source="source" targets="targets">
<tr>
<td>{{ source.name }}</td>
<td></td>
<th v-for="target in targets" class="pr-4 text-sm">
<a :href="target.url" target="_blank">{{ target.name }}</a>
</th>
</tr>
<Post v-for="post in limitPosts(source)" :targets="targets" :post="post" :key="post.id" />
</tbody>
</table>
</template>
<script>
export default {
props: [
'sources',
'targets'
],
methods: {
limitPosts(source) {
return _.slice(source.posts, 0, 5);
}
},
}
</script>
<template>
<tr>
<td>
<a :href="this.post.guid">{{ this.post.title }}</a>
</td>
<td>
{{ this.post.published_at }}
</td>
<td v-for="target in this.targets">
<input type="checkbox"
:checked="submittedToTarget(target)"
@click="toggleSubmission(target, submittedToTarget(target))"
/>
</td>
</tr>
</template>
<script>
import axios from 'axios';
export default {
props: [
'post',
'targets'
],
data() {
return {
submissions: []
};
},
mounted() {
this.submissions = _.map(this.post.submissions, (submission) => {
return submission.target_id;
});
},
methods: {
// @todo can this be a computed prop instead?
submittedToTarget(target) {
// @todo there's gotta be a cleaner way
return _.filter(this.submissions, (submission_target_id) => {
return submission_target_id == target.id;
}).length > 0;
},
toggleSubmission(target, is_submitted) {
const data = {
'target_id': target.id,
'post_id': this.post.id
};
// @todo cleaner way to make this not a conditional?
// @todo There seems to be the need for a Vue-reactive way to modify this array?
if (is_submitted) {
_.remove(this.submissions, target.id);
axios.delete('/api/submissions', { params: data });
} else {
this.submissions.push(target.id);
axios.post('/api/submissions', data);
}
}
},
}
</script>
I knew some of the filter and map operations I wrote were wrong; the original version of this had a much more complex data model, and as I simplified it I knew I could find simpler collection operations. You can also see some @todos in there where I knew what I needed to do but not how.
When I got off the plane, I pushed up this code and then disappeared to take care of my family.
Two Tighten developers shared refactors for me.
First, Daniel suggested a few high-level syntax changes. He pointed out I could change this:
// @todo there's gotta be a cleaner way
return _.filter(this.submissions, (submission_target_id) => {
return submission_target_id == target.id;
}).length > 0;
to this:
return !! this.submissions.find(targetId => targetId == target.id);
Like I mentioned before, this was one of those places where I had simplified the data model, so I was already aware it needed to be better. But man, if there's ever been a good case for the beauty of fat arrow functions and collection methods...
He also pointed out that this:
this.submissions = _.map(this.post.submissions, (submission) => {
return submission.target_id;
});
could be this:
this.submissions = this.post.submissions.map(submission => submission.target_id);
Beautiful.
Keith, one of our senior developers who is also fully responsible for building the beautiful and jealousy-inducing Tighten Typing Challenge, gave me a full refactor. I'll show his code, and then point out a few big changes he made.
<Dashboard :targets='@json($targets)' :sources='@json($sources)'/>
require('./bootstrap');
window.Vue = require('vue');
window.axios = require('axios');
Vue.config.productionTip = false;
import Dashboard from './components/Dashboard.vue';
const app = new Vue({
components: {
Dashboard,
},
}).$mount('#app');
<template>
<table>
<PostList v-for="source in sources" :source="source" :targets="targets" :key="source.id"/>
</table>
</template>
<script>
import PostList from './PostList.vue';
export default {
components: {
PostList,
},
props: {
sources: {},
targets: {},
},
}
</script>
<template>
<tbody>
<tr>
<td>{{ source.name }}</td>
<td></td>
<th v-for="target in targets">
<a :href="target.url">{{ target.name }}</a>
</th>
</tr>
<PostItem v-for="post in recent_posts" :targets="targets" :post="post" :key="post.id" />
</tbody>
</template>
<script>
import PostItem from './PostItem.vue';
export default {
components: {
PostItem,
},
props: {
source: {},
targets: {},
},
computed: {
recent_posts() {
return this.source.posts.slice(0, 5);
}
},
}
</script>
<template>
<tr>
<td>
<a :href="post.guid">{{ this.post.title }}</a>
</td>
<td>
{{ this.post.published_at }}
</td>
<td v-for="target in targets">
<PostItemSubmission
:submission="getSubmissionForTarget(target)"
:post_id="post.id"
:target_id="target.id"
/>
</td>
</tr>
</template>
<script>
import PostItemSubmission from './PostItemSubmission.vue';
export default {
components: {
PostItemSubmission,
},
props: {
post: {},
targets: {},
},
methods: {
getSubmissionForTarget(target) {
return this.post.submissions.find((submission) => submission.target_id == target.id )
},
},
}
</script>
<template>
<input type="checkbox" v-model="has_submission"/>
</template>
<script>
export default {
props: {
submission: {},
post_id: {},
target_id: {},
},
data() {
return {
has_submission: Boolean(this.submission),
url: '/api/submissions',
}
},
computed: {
payload() {
return {
target_id: this.target_id,
post_id: this.post_id
}
},
},
watch: {
has_submission(val) {
if (val) {
axios.post(this.url, this.payload);
} else {
axios.delete(this.url, { params: this.payload });
}
},
},
}
</script>
Let's look at each of Keith's refactors one-by-one.
First, Keith renamed Posts to PostList because it's both clearer and it also follows the Vue naming convention of two-or-more-word component names.
He then pulled out a wrapper for the Post list named Dashboard (which doesn't actually meet that naming convention, but, hell, it's the right name).
He renamed Post to PostItem for the same reasons.
As you can see from my comments inline, the main changes here are to take advantage of ES6 imports and to update the construction of the core Vue instance to be more parallel in shape to the components we use elsewhere. The biggest wins here are consistency and therefore predictability and ease of onboarding new devs.
// Store axios on the window (global) since we'll use it all the time
window.axios = require('axios');
// Disable the Vue console log about building to production
Vue.config.productionTip = false;
// Use ES6 imports instead of `require`
import Dashboard from './components/Dashboard.vue';
const app = new Vue({
// Register components when constructing, which is more parallel
// to how we register components in other components
components: {
Dashboard,
},
// Mount to #app after the fact, making the core Vue registration shape
// more parallel to how we register other components
}).$mount('#app');
In my original Posts.vue, I looped over Post.vue using a method limitPosts, which grabs only the five most recent posts from the given source.
<Post v-for="post in limitPosts(source)" :targets="targets" :post="post" :key="post.id" />
What I should've considered is that since the Posts.vue component only has a single source, that's a perfect fit for a computed property--which is better than a method because its results get cached and only re-computed when its dependencies change. So, we moved from this:
export default {
// ...
methods: {
limitPosts(source) {
return _.slice(source.posts, 0, 5);
}
},
}
to this:
export default {
// ...
computed: {
recent_posts() {
return this.source.posts.slice(0, 5);
}
},
}
Which you can use like this:
<PostItem v-for="post in recent_posts" :targets="targets" :post="post" :key="post.id" />
You can also notice that he took advantage of ES6's collection methods--I'm an old head who still reaches for lodash for everything.
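For reference, here are the lodash-to-native swaps from this refactor side by side (the sample data here is made up):

```javascript
const posts = [{ id: 1 }, { id: 2 }, { id: 3 }];

// _.slice(posts, 0, 2)             ->  native Array#slice
const recent = posts.slice(0, 2);

// _.map(posts, (p) => p.id)        ->  native Array#map
const ids = posts.map((p) => p.id);

// _.filter(posts, fn).length > 0   ->  native Array#some
const hasSecond = posts.some((p) => p.id === 2);
```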
I unthinkingly registered my child component (Post) in the bootstrap:
require('./bootstrap'); // Laravel bootstrap code
window.Vue = require('vue');
Vue.component('Post', require('./components/Post.vue'));
Vue.component('Posts', require('./components/Posts.vue'));
const app = new Vue({
el: '#app'
});
But, of course, that's not necessary, because Post was never going to be used in my HTML, only within a child component. Keith fixed that by importing each component within the component that needs it:
<script>
// Inside of Dashboard.vue, importing and registering the PostList.vue component
import PostList from './PostList.vue';
export default {
components: {
PostList,
},
}
</script>
One of the clumsiest parts of my solution was how I was trying to handle the representation of the checkboxes (targets) in the Post component. I knew something was wrong; I sprinkled @todos about how I wanted to handle them. I awkwardly updated the data model to simplify how I was representing their state ("submission"):
mounted() {
this.submissions = _.map(this.post.submissions, (submission) => {
return submission.target_id;
});
},
But in the end, it just didn't feel right. I was having to create methods for toggling and checking state and everything felt way too dirty.
Turns out the answer was to move all of that onto a specific component for each checkbox. I had considered it but felt like it was going to be overkill, so I'm so glad Keith did it--it made all of the logic so much cleaner. That means PostItem.vue has almost no logic--just a single method to get the right submission for the target you're passing to the checkbox:
<template>
<tr>
<!-- // The rest of the row -->
<td v-for="target in targets">
<PostItemSubmission
:submission="getSubmissionForTarget(target)"
:post_id="post.id"
:target_id="target.id"
/>
</td>
</tr>
</template>
<script>
import PostItemSubmission from './PostItemSubmission.vue';
export default {
// ...
methods: {
getSubmissionForTarget(target) {
return this.post.submissions.find((submission) => submission.target_id == target.id )
},
},
}
</script>
Now we have a new component: PostItemSubmission. Again, this is a checkbox that shows the target (Laravel News) for this source (Tighten.co Blog) and shows whether it's checked yet (whether or not a submission exists yet). Its logic is a lot simpler; has_submission initializes its state by just checking whether a given Submission exists yet, but it's also bound to the checkbox via v-model, and Keith is watching that value; when it's updated by the user, it triggers axios calls to the server:
<template>
<input type="checkbox"
v-model="has_submission"
class="my-2"
style="transform: scale(1.25)"
/>
</template>
<script>
export default {
// ...
data() {
return {
has_submission: Boolean(this.submission),
url: '/api/submissions',
}
},
watch: {
has_submission(val) {
if (val) {
axios.post(this.url, this.payload);
} else {
axios.delete(this.url, { params: this.payload });
}
},
},
}
</script>
Thanks for checking this out. Hopefully this won't be the last time I write some crappy Vue and get smarter folks to fix it up for me, publicly. Is there anything else you'd change? Hit me up on Twitter.
“I’m so glad we studied parallelograms in school instead of wasting our time on stuff like how to do our taxes. It really comes in handy every year during parallelogram season.”
- funny people on Twitter
Imagine, really, though: what would it look like if you replaced Advanced Calc or Trigonometry or Economics with “Basic Budgeting and Taxes”? Maybe “Retirement and Investing 102”. “Why Payday Loans are the Devil 103.”
How much different would our lives be if we all understood how to manage our finances? In this post I’m going to try to give you my version of Basic Personal Finances 101.
Want to skip straight to the recommendations? Scroll down to “Matt’s Magical Finance Plan”.
Let's start here: I'm a computer programmer. I studied English in school. I am also a musician. I’m not an accountant, a tax professional, a mathematician, or even a Mr. Money Mustache or someone else famous. I haven't even cared much about investing or retirement until recently.
However, I am someone who likes finding smart people and distilling their wisdom down. That’s my hope here.
I’ve seen so many people in my generation (I'm technically an old millennial) spend years completely unaware of the basic first steps for handling your current and future financial health. These friends may hear terms like “financial independence”, “index fund”, or “compound interest” thrown around, but that all seems to be for people who have the time—and interest—to spend all their free time learning about this stuff.
My hope is that I can give the guide I wish I had read when I was 25—just enough to get you on the right path, but not so much you’re going to be bored or overwhelmed. This won’t apply to everyone, but it will for most people. You may know enough that you disagree with one point or another; great! This post isn’t for you. My goal is to give you a good foundation and then for you to either coast on that foundation, or find wiser and better teachers than me.
My primary goal is for you to be happy and healthy and financially sound.
This post probably isn’t for you if you spend all your free time on Reddit subs about personal finances; if you’ve already read several books about personal finances and have lots of opinions of your own; if your finances are already in order; or if you don’t live in the U.S.
This post is for you if your finances overwhelm you; if your debt overwhelms you; if you have so much crap to take care of in your life that the last thing you want is to spend a bunch of time learning about money; if you don’t have your finances perfectly handled and managed; if you know you’re supposed to save but you’re just trying to keep afloat; if you know you’re supposed to invest but it’s so overwhelming that you just let your savings account keep building up; if you can’t figure out whether to pay off debt or put money into retirement or put money into savings or whatever else; or if you have never thought about your money before.
Ready? Let’s go.
Before I give specific strategies, here are a few really key concepts I want you to remember. I know these are brief, but the books I’ve linked in the footnotes will explain all of them if you want to dig deeper. If any of these are confusing, just skip them for now and return when you have a better handle on the context surrounding them.
OK. Let’s move on to the actual plan.
Remember, this is just one path. It’s based on asking a few wise friends “What’s your magical finance plan?” and then reading a few books and living my life experience, but I’ve really just taken other people’s wisdom and packaged it in a way that makes sense to me. If something here doesn’t make sense, adjust. If you’ve read a lot on finances, you’ll probably have a million reasons why you disagree. But here’s the basic steps you should take if you’ve done nothing about your finances until this point.
I’m assuming here that you’ll have a bit of money available every paycheck, once you start spending less than you earn (see point #1), that you can use to make wise financial decisions. So, in following this plan, you’ll basically use that money to accomplish the first item, and then the next, and then the next.
Read that again: You don’t start working on item 3 until you’ve accomplished item 2, and so on.
Note: All the “see more” books will be linked at the bottom of the page. They’re all affiliate links, but they’re also great books; if me using an affiliate link feels weird to you, just go look up the book name on Amazon and bypass my affiliate link.
Figure out how much you earn. Figure out how much you spend on average every month, and also as many of your non-monthly expenses as you can. Find some way—budget or whatever else—to spend less than you’re making. For this step, pay just the minimum payments on all your debt but stop accruing more debt. No more credit cards. Freeze ‘em. No payday loans. Just get your monthly expenses lower than your monthly income.
This is the most difficult step in this entire post. It’s harder the less money you make, but it’s hard for everyone. Essentially, you need to cut “discretionary spending” (eating out, new clothes for fashion’s sake, the latest greatest video games, expensive alcohol and drug habits, cable TV) until your expenses are below your income; if that doesn’t work, you may need to take some steps to lower your fixed expenses (get a cheaper apartment, share an apartment, learn to cook at home, eat rice and beans for a few meals a week, switch to a cheaper phone provider) in order to get there. For some people this step is not possible until they can get a better job, but it’s a lot less likely that this is the case for you than you think.
See more: You Need a Budget; The Millionaire Next Door
Save $1000 in an immediately-accessible savings account (one attached to your checking account, at the same bank).
If your employer offers a 401k with “match”, that means they’re essentially giving you free money to incentivize you to put your own money into a 401k, which is a “tax-advantaged” retirement account. DO IT. Talk to your benefits administrator and get your 401k contribution set up so you’re contributing up to the cap for the “match”. Don’t contribute any more than the “match” percentage at this point.
What’s a tax-advantaged account? It’s basically an account that puts your money in a place where the government can’t take as much of it.
See more: How 401(k) Matching Works | Investopedia
Other than student loan and mortgage debt, pay off all of your high interest loans (with an interest rate of 10% or more; this includes most credit cards) one at a time. Pay the minimum on all your loans except one; pay that one off as aggressively as your budget can afford. Once you pay off that one, take the money you were using to pay it and roll it into the next loan.
To decide which to pay off first: either pay on the smallest loan first (easiest, most satisfying; called “snowball method”) or pay on the highest interest rate first (best financial sense, but maybe less satisfying; called “avalanche method”).
See more: Debt snowball vs debt avalanche
Upgrade your emergency fund to cover three to six months of your minimum living expenses. Imagine you lost your job and had to search for one for a while. What would it cost for you to get by? You can consider canceling Netflix and dropping down your phone’s data plan and whatever other changes you would actually make during this time, but make sure you can really get by with the amount you put aside.
If you have any debt remaining (other than student loan or mortgage debt), hit this debt with the debt snowball as described in #4.
The annual maximum contribution you can make to a 401(k) is $19,500, so max that out before anything else.
You (and your partner, if you’re married) can put up to $6,000 (recently raised from $5,500) into an IRA every year. This, like a 401k, is a tax-advantaged (meaning, you get to save your money and give as little to the government as possible) savings account. You can set one up plenty of places, but I use and love Betterment (referral link, but I'd recommend it whether or not you used my referral).
At this point you’re doing great. You’re out of debt, you have a solid start to your retirement, and you have a three-to-six month emergency fund. The next step depends a lot on your life and goals.
Got kids? Consider setting up education savings accounts ("529 plans") for them. House or car purchase coming up? Set up a goal savings account with a good mix of stocks and bonds, or, if you’d rather be a bit safer, an Ally bank savings account. Focusing on getting really set in retirement, or just unsure of what to do? Invest the rest of the money in Vanguard index funds (directly via VTSAX, or using a tool like Betterment or Wealthfront).
See more: Simple Path to Wealth
¯\(°_o)/¯
First thing I do is google jlcollinsnh _subjecthere_
. He’s the guy who wrote The Simple Path to Wealth, and if he’s written about the issue, I trust him.
To learn more about these topics, here are a few places to turn:
I asked for recommendations to add to this post and got a ton of links. I can’t recommend these because I haven’t read them. But if you’re digging for more places to learn, these have all come recommended from friends.
Thanks to Caleb Porzio, Sara Bine, JL Collins, Berry Long, and my dad for teaching me various things about finances that have gotten me to this place. Also thanks to all my Twitter friends who responded to my request for advice, and everyone at Tighten for sharing their advice and stories.
I’ve made a lot of assumptions in this post.
There are a million reasons this might not apply to you or might not work for you. I’m pretty confident there’s an annual income range at which some of the steps here might need to shift around some. I’m absolutely confident that step one (start spending less than you earn) is both the most globally true and also harder the less money you make. Your mileage may vary. Etc.
Did you just read this whole thing and you’re super overwhelmed? Here’s the simple starter kit version:
Everything else can come after that.
For a long time, IoT (Internet of Things) was something that didn't interest me at all.
For starters, I always thought of it as "controlling your house with Alexa", and my family has a "no always-on microphones" rule in the house, so no Echo, no Google Home, no nothing. (Yes, we have smartphones, but we even work—and are trying to work even more—to limit their ubiquity in the house). It was only recently that I realized there are many other ways to control smart devices other than just voice-activation.
But more importantly, I just didn't get the appeal. Why is this actually valuable in my life?
It took visiting my dad for Thanksgiving and seeing that he received practical value from his IoT devices—for example, turning on the exterior lights of his house when he drives up actually makes it easier to navigate the driveway, or pressing a single button at bed time to trigger a series of events saves him from manually walking around the house every night flipping switches.
As a programmer, of course, I instantly saw how I could use IoT devices to allow my applications to interact with the real world. I'm totally fascinated by this idea; I've long wanted a button I could press that would make things happen on the Internet. (Yes, I own several Amazon Dash buttons, but they have a pretty steep learning curve and aren't particularly easy to get working). I just hadn't made the connection between this dream and the new "IoT" fad.
So, at a friend's recommendation, I got my first IoT devices: LIFX lights.
There are many different smart lighting companies. Sylvania, Phillips Hue, LIFX; the list goes on and on.
Hue requires you to have a Hue hub, so it's hard to just get started with. Sylvania has a hub, but can also connect directly to SmartThings (a more centralized, shared system for IoT); great bulbs, but not good for one-offs like this.
With LIFX, each bulb functions independently; that means you can just buy one and only one, and also each individual bulb can work with multiple systems like SmartThings and Apple's HomeKit. The bulbs are pricier as a result, so if you're going to do your whole house with them, I'd consider a SmartThings hub and some Sylvania bulbs. But if you just want to test the waters, LIFX is a solid place to start. (If price is a big concern, I hope to write up a similar article soon about the currently-$30 ThingM blink(1))
Another win for LIFX: There's a direct IFTTT integration to your LIFX account. This is the easiest possible connection between an app and an IoT device you can possibly make.
If you're just getting started, your cheapest multi-color option is the currently-$45 "LIFX Mini Color". I bought mine all during their incredibly discounted holiday season, so if all else fails you can wait until Black Friday to snatch some up.
Let's start simple and look at how to control a LIFX light from your Laravel application.
Let's do this. We're going to enable your web application, with just a few lines of code, to control your LIFX lights, right in front of you, right now.
If you're an IFTTT pro, skip to the next section.
IFTTT is a web-based system that lets you plug triggers ("this") to actions ("that"). Some of the most common examples are: If "I add a new photo to my Instagram" (this) then "save that photo to my Dropbox 'photos' folder" (that).
Each "Applet", which is a description of a "this" and a "that", runs independently from the others. And each "this" and "that" is connected to a "service", which you have to authenticate into your IFTTT account. Facebook; Alexa; Tumblr; Pinboard; Google Drive; Dropbox; etc.
Let's get it going. First, visit https://ifttt.com/lifx and authorize your IFTTT account with your LIFX account.
Next, create a new IFTTT Applet and begin to set up your "this" trigger with the type "Webhooks". IFTTT calls this the "Maker webhooks" type, and you'll have to also give it permission to hook into your IFTTT account.
You'll want to go into the Maker webhooks landing page and visit the documentation page from there, where you'll get a sample webhook URL you can ping from your app. You only get one URL, but you can change its {event}
segment per intended use.
The sample URL you get should look something like this:
https://maker.ifttt.com/trigger/{eventName}/with/key/{yourKey}
For this example, we're going to be triggering the "new_episode" event, which I'll trigger from my app every time there's a new episode of my podcast.
Let's go back to that new Applet you were creating. Now that you have Maker webhooks enabled, you can set it so your "this" is "Receive a web request with Event Name of 'new_episode'". Done. It's now just listening for this URL:
https://maker.ifttt.com/trigger/new_episode/with/key/{yourKey}
Now, let's set the "that". Pick LIFX, and authorize your LIFX account with your IFTTT account.
There are a few actions you can choose to take with your LIFX lights, and you can dig into all of them. I picked "Blink lights", and I set the lights on the front of my house to blink blue 3 times at a bright setting. Why not.
Want to test it to make sure it works? Just ping it with Curl on your command line. Remember, {yourKey}
can be found by visiting the "Maker webhooks" service section of IFTTT and clicking "Documentation".
Run this command:
curl -X POST https://maker.ifttt.com/trigger/new_episode/with/key/{yourKey}
and you should, after a second or three, see your lights blink. Almost done!
Finally, you just have to ping a webhook from your Laravel (or other PHP) app.
There are three main ways to send a POST HTTP request in PHP: Curl, file_get_contents
, or Guzzle. There are a million examples if you just google "PHP send POST", so I'll just give you quick examples for Guzzle (if you already have it on your project, which you will if it's Laravel) or file_get_contents
if you don't.
Let's assume you're using Laravel. You'll want to move your key out of your code in case you ever share it with anyone else, which you can do by adding a section to the config/services.php
file:
// config/services.php
// ...
'ifttt' => [
'webhook_key' => env('IFTTT_WEBHOOK_KEY'),
],
And then add that environment variable to .env
(with the real key) and .env.example
(with an empty key, as an example):
# In .env
IFTTT_WEBHOOK_KEY=myKeyFromTheIftttDocumentationPage
# In .env.example
IFTTT_WEBHOOK_KEY=
Finally, you can make the call:
$eventName = 'new_episode';
(new \GuzzleHttp\Client)->post(
'https://maker.ifttt.com/trigger/' . $eventName . '/with/key/' . config('services.ifttt.webhook_key')
);
Boom. Up and running. Throw that bad boy into a cron job, an event listener, a controller method, or whatever else, and any action or trigger in your app can now make changes to your LIFX lights.
file_get_contents
Don't have Guzzle? You can use Curl or file_get_contents
; just because Curl is a dependency that MAYYY not be around on some servers, I'll show you file_get_contents
.
file_get_contents(
'https://maker.ifttt.com/trigger/{eventName}/with/key/{key}',
false,
stream_context_create([
'http' => [
'header' => "Content-type: application/x-www-form-urlencoded\r\n",
'method' => 'POST',
]
])
);
IFTTT doesn't have their CORS set up to allow webhook pings from JavaScript—probably wisely—so you can't do this from frontend JavaScript.
What if you want to pass the color, for example, along with the webhook? Just pass it as a parameter:
(new \GuzzleHttp\Client)->post(
'https://maker.ifttt.com/trigger/{eventName}/with/key/{key}',
[
'form_params' => ['value1' => 'red'],
]
);
// or...
file_get_contents(
'https://maker.ifttt.com/trigger/{eventName}/with/key/{key}',
false,
stream_context_create([
'http' => [
'header' => "Content-type: application/x-www-form-urlencoded\r\n",
'method' => 'POST',
'content' => http_build_query(['value1' => 'red'])
]
])
);
For some reason, it only gives you a limited set of possible keys you can pass: value1, value2, or value3. I'm not sure why, but this may be a built-in restriction of the "Maker webhook" type; please let me know on Twitter if you know.
Then pull it in the IFTTT custom settings; as you can see here, I'm setting "color" (a LIFX property, which I discovered by looking at the explainer text under Advanced Options) to be equal to the value of the "value1" form input. (For some reason, IFTTT capitalizes Value1
, but it still works the same.)
This is just one little thing done. One little bit of LIFX, a bit of IFTTT (I'm sure you've realized that you can use this to trigger all sorts of different events with your apps and IFTTT), and only a single-direction call (app -> real world device).
Next time we'll talk about receiving input from an IoT device on your apps. Stay tuned, dear listeners (readers)!
Already know how facades work? Skip to what’s new.
If you’re not familiar with facades in Laravel, they’re shortcut classes that provide static access to non-static methods on service classes bound to Laravel’s container. Phew, that’s a mouthful; let’s take a look at some real code.
For instance, if I want to get something from the session, here’s one way to do it in Laravel:
<?php
namespace App\Http\Controllers;
use Illuminate\Session\SessionManager;
class ThingController extends Controller
{
protected $session;
public function __construct(SessionManager $session)
{
$this->session = $session;
}
public function doThing()
{
$importantValue = $this->session->get('important');
}
}
… or in a view:
Your user ID is: {{ app('Illuminate\Session\SessionManager')->get('important') }}
However, this means you have to inject a session instance anywhere you’re going to use it. This isn’t a big issue, but in views, and sometimes in controllers (particularly before controllers were namespaced), this hasn’t always been the most convenient. The app()
helper also makes it easier, as you can see in the view example. But facades make that even easier:
<?php
namespace App\Http\Controllers;
use Illuminate\Support\Facades\Session;
class ThingController extends Controller
{
public function doThing()
{
$importantValue = Session::get('important');
}
}
… or in a view:
Your user ID is: {{ Session::get('important') }}
The facade works like this:
class Session extends Facade
{
protected static function getFacadeAccessor()
{
return 'session';
}
}
This is telling the container this: “When I use a static method on the Session
facade, call it on an instance of app('session')
”. The facade functionality pulls an instance out of the container and calls the method directly on that.
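Under the hood, that forwarding relies on PHP's `__callStatic()` magic method. Here's a stripped-down sketch of the mechanism; this is not Laravel's actual code (the real version lives in `Illuminate\Support\Facades\Facade` and resolves out of the real service container), and the `app()` helper and `SessionStore` class here are invented stand-ins so the example is self-contained:

```php
<?php

// Stand-in for Laravel's container helper (an assumption for this sketch).
function app(string $abstract): object
{
    static $bindings = null;
    $bindings ??= ['session' => new SessionStore()];

    return $bindings[$abstract];
}

// A fake session store, just so the sketch runs on its own.
class SessionStore
{
    private array $data = ['important' => 'some-value'];

    public function get(string $key)
    {
        return $this->data[$key] ?? null;
    }
}

abstract class Facade
{
    abstract protected static function getFacadeAccessor(): string;

    // Any static method that doesn't exist on the facade lands here...
    public static function __callStatic(string $method, array $args)
    {
        // ...where we resolve the underlying instance out of the container...
        $instance = app(static::getFacadeAccessor());

        // ...and forward the call to it as a normal instance method call.
        return $instance->$method(...$args);
    }
}

class Session extends Facade
{
    protected static function getFacadeAccessor(): string
    {
        return 'session';
    }
}
```

So `Session::get('important')` hits `__callStatic()`, which resolves `app('session')` and calls `get('important')` on the resulting instance.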
Good? Good. Let’s cover what a real-time facade is.
Real-time facades let you create your own facades on the fly. Instead of having to create a facade class like the Session
facade class I referenced above, you can use a class as its own facade by adding Facades\
to the beginning of its own namespace.
Let’s say I have a class called Charts that has a burndown()
method:
<?php
namespace App;
class Charts
{
protected $dep;
public function __construct(SomeDependency $dep)
{
$this->dep = $dep;
}
public function burndown()
{
return 'stuff here' . $this->dep->stuff();
}
}
There’s nothing special about this class. Here’s how we would normally use it in a view:
<h2>Burndown</h2>
{{ app(App\Charts::class)->burndown() }}
Now, let’s make it a facade, just by changing the namespace:
<h2>Burndown</h2>
{{ Facades\App\Charts::burndown() }}
Or, in a class, from this:
<?php
namespace App\Stuff;
use App\Charts;
class ThingDoer
{
private $charts;
public function __construct(Charts $charts)
{
$this->charts = $charts;
}
public function doThing()
{
$this->charts->burndown();
}
}
to this:
<?php
namespace App\Stuff;
use Facades\App\Charts;
class ThingDoer
{
public function doThing()
{
Charts::burndown();
}
}
That’s all. Just a quick and simple way to create a facade on the fly. One more tool in your terseness arsenal.
You might be asking yourself, "why all the fuss for something so simple?" In terms of its terseness, it definitely has a lot of value in some contexts and negligible impact in others. What if I told you, though, that you could use real-time facades to make your code more testable?
Taylor wrote a great post explaining how he uses real-time facades in his Forge code, and how it's now more testable as a result.
In Laravel 5.4, collections got a few boosts. Let’s take a look.
The biggest-name change is “higher order messaging”, an object-oriented design pattern described first in 2005 (“Higher Order Messaging”) and later implemented in Ruby (Mistaeks I Hav Made: Higher Order Messaging in Ruby) by Nat Pryce, co-author of GOOS.
The best way to understand Higher Order Messaging is to walk through an example. I’m going to take the idea and shape of the code directly from Nat Pryce’s article, but adapt them for PHP and make them a little easier to follow. Thanks to Nat for his original writing.
We have a collection of Claimant
s who are receiving benefits from the government. A claimant has a name
, a gender
, an age
, and an integer benefits
that represents the total of their weekly monetary benefits.
Let’s say we want to add $50 a week to the benefits
total for every claimant who is retired. We’re using the receiveBenefits()
method to increase the benefits
value.
First, we can iterate over it procedurally:
foreach ($claimants as $claimant) {
if ($claimant->is_retired) {
$claimant->receiveBenefit(50);
}
}
The revolution that hit the Laravel world over the last year introduced the idea of higher-order functions (functions that take a Closure) and how they can be used in collection pipelines. Here’s the same call using a collection pipeline:
collect($claimants)->filter(function ($claimant) {
return $claimant->is_retired;
})->each(function ($claimant) {
$claimant->receiveBenefit(50);
});
Great. If you’ve read Adam’s book or watched his course this isn’t news. But let’s take a look at the next step—not higher-order functions, but what is called higher-order messages:
collect($claimants)->filter->is_retired->each->receiveBenefit(50);
As Nat defines them:
A higher order message is a message that takes another message as an "argument". It defines how that message is forwarded on to one or more objects and how the responses are collated and returned to the sender. They fit well with collections; a single higher order message can perform a query or update of all the objects in a collection.
If all this talk of messages seems foreign, it would be worth reading up a bit on the idea of OOP as “message passing”. In short, when Nat is talking about messages here he’s (sort of) referring to method calls as “messages” which you assemble together in a sort of language — “claimants (filter is retired) each receive benefits” isn’t a perfect English sentence, but it’s definitely a series of messages sent to the claimants collection, not a bunch of implementation details.
I think Nat’s post does the best job of explaining the benefit we’re getting by converting this code sample to use higher order messaging:
[T]he code using higher order messages most succinctly expresses the business rule being executed. It expresses what is being performed and hides the details of how.
You can already get a taste of how it works from my examples above. Essentially, instead of calling a collection method like filter()
and giving it a Closure that returns the property from each object, you call each method (message) one after another and the Higher Order Messaging collection pipeline reads your intent and makes it work.
It’s a little hard to describe perfectly—how are both "filter" and "is_retired" messages? Essentially, when you call collection methods like filter
using their higher order messaging syntax ($collection->filter
instead of $collection->filter(...)
) they’re now set to expect the next string in the call stack to be a “message” passed to them. If I’m filter
being called in a higher order messaging context, I expect the next string down the call stack to be a property or method that I’ll call on each item for my filter
truthiness test.
class Person
{
public $isAdmin;
public function __construct($isAdmin)
{
$this->isAdmin = $isAdmin;
}
public function isAdmin()
{
return (bool) $this->isAdmin;
}
}
$people = collect([new Person(false), new Person(true)]);
// Filter against a prop
$people->filter->isAdmin;
// ... same as:
$people->filter(function ($person) {
return $person->isAdmin;
});
// Filter against a method
$people->filter->isAdmin();
// ... same as:
$people->filter(function ($person) {
return $person->isAdmin();
});
So, practically, higher order messaging in Laravel collections simplifies a few extremely common syntaxes for passing properties or methods into collection methods like filter
and each
. These changes make the code simpler and more expressive and everyone wins.
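To make the mechanics concrete, here's a toy sketch of how such a proxy can be built. This is not Laravel's actual code (the real class is `Illuminate\Support\HigherOrderCollectionProxy`), and `TinyCollection` is an invented stand-in for `Illuminate\Support\Collection` with only the two methods we need:

```php
<?php

// A claimant like the one in the example above.
class Claimant
{
    public function __construct(
        public bool $is_retired,
        public int $benefits = 0
    ) {}

    public function receiveBenefit(int $amount): void
    {
        $this->benefits += $amount;
    }
}

// The proxy remembers which collection method (filter, each, ...) was
// accessed as a property, then waits for the next "message".
class HigherOrderProxy
{
    public function __construct(
        private TinyCollection $collection,
        private string $method
    ) {}

    // ->filter->is_retired lands here: wrap the property access in a Closure.
    public function __get(string $key)
    {
        return $this->collection->{$this->method}(fn ($item) => $item->{$key});
    }

    // ->each->receiveBenefit(50) lands here: wrap the method call instead.
    public function __call(string $name, array $args)
    {
        return $this->collection->{$this->method}(
            fn ($item) => $item->{$name}(...$args)
        );
    }
}

// A tiny stand-in collection with just filter() and each().
class TinyCollection
{
    public function __construct(private array $items) {}

    public function filter(callable $callback): self
    {
        return new self(array_values(array_filter($this->items, $callback)));
    }

    public function each(callable $callback): self
    {
        foreach ($this->items as $item) {
            $callback($item);
        }

        return $this;
    }

    // Accessing ->filter or ->each as a *property* hands back a proxy.
    public function __get(string $method): HigherOrderProxy
    {
        return new HigherOrderProxy($this, $method);
    }
}

$claimants = new TinyCollection([new Claimant(true), new Claimant(false)]);
$claimants->filter->is_retired->each->receiveBenefit(50);
```

The whole trick is that a property access on the collection returns a proxy, and the proxy's own `__get`/`__call` turn the *next* property or method into a Closure on your behalf.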
If you’re not familiar with the already-existing pipe()
method in Laravel’s collections, here’s how it works: the pipe()
method’s Closure is passed the entire current collection as a parameter, and whatever you return from that Closure will replace the collection.
return collect($peopleArray)
->sort('age')
->pipe(function ($people) {
// Final collection is run through the transformer
// and then the output of that is returned
return app('peopleTransformer')->transform($people);
});
The new when()
method is the same, except it’s conditional. To understand the when()
method, just take the pipe()
method and (in your head) modify it to accept a first parameter; if that parameter is truthy, run the second-parameter Closure as a pipe()
method. If it’s falsey, ignore the entire when()
call and keep moving. That’s the when()
method.
return collect($peopleArray)
->sort('age')
->when(request()->wantsJson(), function ($people) {
// Final collection is run through the transformer
// and then the output of that is returned
return app('peopleTransformer')->transform($people);
});
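Stripped of everything else, the relationship between the two methods can be sketched like this. This is a simplified illustration, not Laravel's actual implementation (the real methods live in `Illuminate\Support\Collection`, and `when()` there also supports a default callback); `DemoCollection` is an invented stand-in:

```php
<?php

// A minimal stand-in collection, just to show pipe() vs. when().
class DemoCollection
{
    public function __construct(public array $items) {}

    // pipe(): hand the whole collection to the callback and return
    // whatever the callback returns.
    public function pipe(callable $callback)
    {
        return $callback($this);
    }

    // when(): run the callback as a pipe only if the condition is truthy;
    // otherwise skip it and keep the pipeline moving unchanged.
    public function when($condition, callable $callback)
    {
        return $condition ? $callback($this) : $this;
    }
}
```

So `when()` really is just a conditional `pipe()`: same callback shape, same replacement behavior, gated by the first argument.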
That’s it! More ways collection pipelines can make your code cleaner, terser, more expressive, and more elegant.
tap()
, dozens of times in his videos. But what does it actually do?
tap()
was introduced in Laravel 5.3 and got some power boosts in 5.4. Let’s take a look at what it is and what it does.
tap()
function tap($value, $callback)
{
$callback($value);
return $value;
}
That’s what the tap()
function originally looked like—it's gotten a few small improvements since then, but let's start at the basics.
It’s really simple, right? So why is everyone so excited?
Let’s look at what it does: Take a value, pass that value into some form of callback, and then return the value.
tap()
Here’s what it looks like to refactor something to use tap()
.
First, let’s take a common pattern:
public function generateUseAndReturnThing($input)
{
$thing = $this->thingFromInput($input);
$this->doActionToThing($thing);
return $thing;
}
We take some input, make a thing from it, perform some actions with that thing, and then finally return the thing that we generated.
So, remember, tap()
takes a value, acts on it, and then returns it. That means we can identify a potential place to use tap()
here; we’re acting on a value ($thing
) and then returning that same value.
That’s our key to watch out for. Consider it a code smell that hints at a potential use case for tap(): using temporary variables just for the sake of returning them later.
So, let’s make this use tap()
instead.
public function generateUseAndReturnThing($input)
{
return tap($this->thingFromInput($input), function ($thing) {
$this->doActionToThing($thing);
});
}
Let’s walk through what we’ve changed here.
First, we are no longer assigning the output of $this->thingFromInput($input)
to a temporary variable; we’re instead passing it directly as the “value” to tap()
.
Second, we know the “value” (that is, the first parameter you send to tap()
) is provided as a parameter to our callback function, so we pass in a closure that takes the “value” $thing
as its parameter.
Third, we operate on our value.
And fourth, we close out our tap function. The final return of the tap()
function is the originally-passed value (the output of $this->thingFromInput($input)
) so this is what is returned from our generateUseAndReturnThing()
method.
How much did that help? What did it bring us?
Before we ask those questions, let’s read a few words from a recent post by Taylor Otwell about Tap (Tap, Tap, Tap). Remember also that Taylor and Adam Wathan are the ones who originally popularized tap()
—think of their influences and values.
On first glance, this Ruby inspired function is pretty odd. … I find it often lets me write terse, one-line operations that would normally require temporary variables or additional lines.
This isn’t allowing us to do things we couldn’t do before.
Instead, it’s helping us write terser code, with fewer temporary variables and fewer lines of code.
In Taylor’s article he gave an example of how he’s using Tap in the Laravel core.
public function create(array $attributes = [])
{
return tap($this->newModelInstance($attributes), function ($instance) {
$instance->save();
});
}
This is the create()
method on Eloquent models. To understand its value, let’s take a look at what this same method call would look like without tap()
:
public function create(array $attributes = [])
{
$instance = $this->newModelInstance($attributes);
$instance->save();
return $instance;
}
We’re dropping the need for the $instance
temporary variable and wrapping the instantiation, save()
call, and the return of the instance itself up into the core workflow of the tap()
function.
Here’s another really practical use case. One of the most common workflows for writing middleware in Laravel looks a bit like this:
public function handle($request, Closure $next)
{
$response = $next($request);
$this->decorateResponseSomehow($response);
return $response;
}
We can now refactor this using tap()
:
public function handle($request, Closure $next)
{
return tap($next($request), function ($response) {
$this->decorateResponseSomehow($response);
});
}
There’s another interesting use case that I came across in Derek MacDonald’s article on Tap; Derek found that the AuthenticateSession
middleware now uses tap()
, but not quite the way I described it above.
public function handle($request, Closure $next)
{
return tap($next($request), function () use ($request) {
$this->storePasswordHashInSession($request);
});
}
Notice anything different? Here’s what that call would look like before tap()
:
public function handle($request, Closure $next)
{
$response = $next($request);
$this->storePasswordHashInSession($request);
return $response;
}
Unlike in my example, the Closure isn’t actually performing any operations working with the “value” (the result of $next($request)
). Instead, this method uses tap
because it still saves us from needing to save $response
as a temporary variable before storing the hash.
In Laravel 5.4, the tap()
function got a new use case. Most of the examples I’ve shown up until this point have been calling an outside method on our value:
return tap($thing, function ($thing) {
$this->doSomethingToThing($thing);
});
But there are other times where you want to call a method on the object itself; in these cases, the benefit of using tap()
is just to return the object when the method we're calling on the object would normally return something else—for example, Eloquent models' update()
method returns a boolean.
That converts this call:
$user->update([
'name' => $name,
'age' => $age,
]);
return $user;
into this:
return tap($user, function ($user) {
$user->update([
'name' => $name,
'age' => $age,
]);
});
As you can see, this isn’t quite as much of an improvement as our previous refactorings-with-tap
were. But in 5.4, you can now use tap()
to call methods on the tapped object, and those methods—even if they natively return something other than the object—will return the object. Here’s how:
return tap($user)->update([
'name' => $name,
'age' => $age,
]);
Now that’s clean.
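Here's a sketch of how that argument-less form can work, using a JavaScript Proxy to stand in for Laravel's higher-order tap proxy (illustrative only; Laravel's real implementation is a small PHP proxy class):

```javascript
// Sketch: tap(value) with no callback returns a proxy. Any method called
// through the proxy is forwarded to the target, its own return value is
// discarded, and the original target is returned instead.
function tap(value, callback) {
    if (callback) {
        callback(value);
        return value;
    }
    return new Proxy(value, {
        get(target, prop) {
            const member = target[prop];
            if (typeof member !== 'function') return member;
            return (...args) => {
                member.apply(target, args); // result (e.g. update()'s boolean) is discarded
                return target;              // always hand back the tapped object
            };
        },
    });
}
```

So `tap(user).update({...})` calls update() for its side effect but hands back the user, even though update() itself returns a boolean.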
In Laravel 5.4, we got the tap()
method in our collections as well. Using tap()
inline temporarily pauses your collection pipeline and passes you an instance of the collection itself, but once your tap
Closure has executed, the pipeline continues as if nothing had happened.
Note that whatever you return
from your tap
method is just thrown away. This isn’t for returns; it’s for debugging with var_dump
or for writing to a log or for performing some separate action.
return collect($peopleArray)
->sortBy('name')
->tap(function ($people) {
// Useful for debugging
var_dump($people);
})
->filter(function ($person) {
return $person->syncable === true;
})
->tap(function ($people) {
// Useful for performing some operation without
// requiring a temporary variable
app('thirdPartyService')->syncPeople($people);
});
My first response was to compare this method to pipe()
and each()
, but they’re different in that each()
is passed each of the items in the collection one at a time where tap()
is passed the entire collection; and pipe()
modifies the collection to be whatever you return from the method, whereas tap()
discards your return.
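The distinction is easier to see side by side. Here's a toy collection class in JavaScript (hypothetical, just for illustration) contrasting the three methods:

```javascript
// Toy collection contrasting each(), tap(), and pipe().
class Collection {
    constructor(items) { this.items = items; }
    // each: callback receives the items one at a time; returns the collection
    each(callback) { this.items.forEach(callback); return this; }
    // tap: callback receives the whole collection; its return value is discarded
    tap(callback) { callback(this); return this; }
    // pipe: the pipeline "becomes" whatever the callback returns
    pipe(callback) { return callback(this); }
}
```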
I’m a huge fan of anything that makes our code simpler to read for developers down the road. I’m also a huge fan of coding practices that make our code more expressive and that avoid unnecessary extra code.
tap()
, and collection pipelines before it, are strange in that, when they’re first introduced to your programming lexicon, there’s a cost; until you wrap your head around how tap()
works, it’s actually more cognitive work than not using tap()
. Same with collection pipelines.
Just like collection pipelines, I’ve also seen people get so excited about tap()
that they use it everywhere—often in places where it doesn’t belong. Just because something is new doesn’t mean you should use it everywhere.
All those caveats having been given, though, I do think tap()
has a place in our programming vocabulary, and I’m glad it’s here to stay. Just search for tap()
in the Laravel codebase and you can see how many places Taylor and others are putting tap()
to good use.
I hope this longer writeup of tap()
has helped it really click in your brain. Let me know on Twitter if anything in this article isn't clear.
Update: After I wrote this post, Taylor also released an intro post on Medium covering a lot of these topics from the official angle.
Horizon is a package for configuring and understanding your queues. It provides control, insight, and analytics: the number of queues and queue workers you have, your failed jobs, and your job throughput. Horizon makes it easy to configure your queues and see how they're doing.
Using code-based configuration, just like you're used to with any other Laravel apps and components, you can tell Horizon how many supervisors to run and for each define which connection they'll use, which queues they should operate on, which mechanism to use for balancing the work, and the maximum number of processes they can spin up.
Horizon makes it easy to define all of these settings uniquely for each environment.
<?php
// horizon config file
[
// ...,
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 10,
            'tries' => 3,
        ],
    ],
    'local' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'notifications'],
            'balance' => 'simple',
            'processes' => 20,
            'tries' => 3,
            'min-processes' => 5, // optional config
        ],
    ],
],
// ...
'waits' => ['redis:default' => 5], // If I read this syntax correctly, sets how long to wait before considering the queue "backed up"
];
There are a few commands you can pass to Horizon. Here are those Taylor shared:
php artisan horizon:pause # pause but not stop worker
php artisan horizon:continue # resume after pause
php artisan horizon:terminate # gracefully stop during deploy process
php artisan horizon:snapshot # take a metrics snapshot; cron as often as you want
Horizon provides you with a few key metrics on your entire queue:
It'll also show you a list of the supervisors you have running, how many processes they're supervising, which queues they're operating on, and whether they're a balancing supervisor.
Horizon provides throughput-over-time and runtime-over-time graphs for each of your individual queued jobs.
Note: This also works for anything else that's queued. Event listeners, notifications, queued mail, etc.
Horizon makes it simple to "tag" your queued jobs and to monitor for given tags, giving you even better insight into certain classes of jobs or certain users.
To tag a job, add it to the tags()
method on the job:
class MyJob
{
// ...
public function tags()
{
return ['videos', 'video:' . $this->video->id];
}
}
Then later you can go to the tag monitoring and choose to pull just specific tags; e.g. if a customer emails saying they're having trouble with Invoice 14, you can monitor the Invoice:14
tag and just watch it for a bit to see what's happening, failing, etc.
If you don't provide a tags
method, Horizon will auto-tag jobs by the IDs of their attached Eloquent models; if you have a Video Eloquent model attached to your job with an ID of 4, your job will automatically get the tag App\Video:4
applied to it.
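Conceptually, the auto-tagger scans the job's properties for Eloquent models and builds class-name:id tags. A rough JavaScript sketch of that idea (the property scan and tag format here are illustrative; Horizon's real tags use the fully qualified class name, like App\Video:4):

```javascript
// Sketch: derive Horizon-style auto-tags by scanning a job's properties
// for model-like objects (class instances with an id) and emitting "Class:id".
function autoTags(job) {
    return Object.values(job)
        .filter(value =>
            value !== null &&
            typeof value === 'object' &&
            'id' in value &&
            value.constructor.name !== 'Object') // skip plain object literals
        .map(value => `${value.constructor.name}:${value.id}`);
}
```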
Horizon keeps track of the most recent queued jobs and shows all of the important information you might want to know about each: which queue they ran on, which tags they had, when they were queued, how long they took to run, and whether or not they succeeded.
For each failing job, Horizon tracks the stack trace and all of the relevant data, making it easy to understand the reason for the failure. You can also choose to retry a failed job after resolving whatever caused it to fail.
If you retry a job, you can see all of the additional tries after the first to see how they did (or didn't) change.
Even if you're not monitoring a tag, you can search the "Failed Jobs" list for any tags—allowing you to debug after the fact.
Failed jobs are retained for seven days (configurable) and the ability to search their tags is retained for 48 hours. All of this metadata is stored directly in Redis.
You may have noticed that one of the options in the configuration is "balance", which is set to "simple" in each of the given examples. Queue balancing describes the strategies Horizon uses to split worker resources between multiple queues.
Horizon can send SMS or Slack messages to notify the app owner if the wait is getting long on a queue.
// AppServiceProvider
Horizon::routeSlackNotificationsTo('slack endpoint');
Horizon::routeSmsNotificationsTo('phone number');
"Long" is defined via a "waits" configuration setting.
On first installation, the dashboard is local-only.
You can also choose who has access to it:
// AppServiceProvider
// Choose who can see the dashboard
Horizon::auth(function ($request) {
return true;
});
I'm looking forward to diving into Horizon as soon as I can get my hands on it, and I'll be sure to write up anything I haven't covered here. But as someone who runs quite a few projects that rely on queues, I'm very much looking forward to adding Horizon to everything.
- php artisan preset react: default React components
- php artisan preset none: axios & blank Sass
- Route::view('/welcome', 'welcome') returns the welcome view
- Route::redirect('home', 'dashboard') returns a redirect to another URI
- Blade::if('public', function () { return app()->context()->isPublic(); }) gives you custom @public / @endpublic directives
Mailables can now be returned directly from routes to preview them in the browser:
Route::get('preview', function () {
return new MyMailable;
});
- A report() method on an exception defines how to report it
- A render() method on an exception defines how to render it

class MyException extends Exception
{
public function report()
{
// send wherever
}
public function render()
{
return view('error-or-whatever');
}
}
The new Responsable interface requires a toResponse() method, which shows how to convert the object to a response:

class Thing implements Responsable
{
public function toResponse()
{
return 'This is a great response! ' . $this->id;
}
}
On-demand notifications (a.k.a. anonymous notifications)
// Easy way to notify people who aren't in your system as Notifiable
Notification::route('mail', 'taylor@laravel.com')
->notify(new App\Notifications\NotifyThingHappened);
Custom validation rule objects:

- Rule objects can be passed to $this->validate()
- php artisan make:rule generates a new rule class
- passes() returns a boolean; receives the attribute name and value
- message() returns the error message if needed

Use:
$this->validate([
'myfield' => [
'string',
'required',
new App\Rules\MyValidationRule
]
]);
If you have a proxy in front of your app, like a CloudFlare proxy or something else that terminates SSL at the proxy, Laravel currently cannot correctly detect that it's an HTTPS request; it's getting requests on port 80 instead of port 443.
The new TrustProxies middleware, extending Fideloper's TrustedProxy package, teaches the framework to trust the proxy's forwarded headers (including those identifying that the original request used SSL).
$proxies
property on the middleware allows you to define which proxies are trusted.
The new RefreshDatabase trait:

- Doesn't run down migrations; it just wipes the whole database before re-running the up migrations
- Replaces DatabaseMigrations and DatabaseTransactions; a new, combined way of doing it
- Your schema is always up to date (solved by DatabaseMigrations), but it's faster because we're not re-migrating every time (solved by DatabaseTransactions); now both in one world: RefreshDatabase
// Usage
class MyTest extends TestCase
{
use RefreshDatabase;
}
- New WithoutExceptionHandling middleware for tests
- Package auto-discovery: no more registering packages in the config/app.php provider array; now, set a block in each package's composer.json that teaches Laravel to discover it without any manual work
- php artisan package:discover runs the discovery process
- vendor:publish now prompts you with a menu
- Artisan commands no longer need to be registered in the app/Console/Kernel.php commands array

Job chaining:

dispatch((new App\Jobs\PerformTask)->chain([
new App\Jobs\AnotherTask,
new App\Jobs\FinalTask($post)
]));
- Set $deleteWhenMissingModels to true on a queued job and, if its model has been deleted, the job just deletes itself without even failing

If you're not familiar with Lambo, it's a command-line tool I built to quickly spin up a new Laravel application and take some of the most common steps you may want to at the beginning of each project. Check out our writeup on the Tighten blog if you want to learn more.
I use Lambo all the time. I create a lot of apps, yes, but I also write and teach about Laravel a lot. Before Lambo, I would hack my examples and tests into a pre-existing app to make sure things worked the way I wanted. With Lambo, I now just spin up a quick new Laravel install every time I need to test anything. I love Lambo.
I love the customization flags you can pass into Lambo. "I want to open the code in Sublime Text, the site in Chrome, and I want to run npm install
afterward." Cool.
But the switches are hard to remember; and, honestly, it's harder to remember to even type them. lambo MyApplication -e subl -q imconfused -zztop
... more often than not I type lambo MyApplication
and then two seconds later yell "crap" and CMD-c and try to re-do it with the right flags. At that point, Lambo's still faster than not-Lambo, but it's starting to bother me.
And, of course, there are still some common tasks that you could never perform with Lambo—for example, installing your use-on-every-app packages.
The config file

So, I made an issue. I had some discussion with folks who take a look at Lambo's issues, and we came up with the idea of a config file that lets you set your defaults.
Thanks to @_cpb, you can create a config
file (which lives in ~/.lambo/config
) that's a .env
-style set of key/value pairs. Each key represents one of the variables that is toggled by one of the command-line flags. This way, you don't have to remember to always set your code editor; just set it once in config
and it sticks.
Just upgrade to the latest (composer global update tightenco/lambo
) and then create a config file (lambo make-config
). Here's mine:
#!/usr/bin/env bash
PROJECTPATH="."
MESSAGE="Initial commit."
DEVELOP=false
AUTH=false
NODE=true
CODEEDITOR=subl
BROWSER=""
LINK=false
Edit that file and its flags will be passed to every new application you create with Lambo.
The after file

That covers you for remembering the config flags. But what about other operations you take every time you spin up a site?
We have you covered for that, too. Once again, I opened up an issue, we had some conversation, and a different contributor @quickliketurtle wrote the actual code. You can now create an after
script (~/.lambo/after
), a shell script that runs after Lambo's normal processes, and you can define literally anything you want in there.
Here's what mine looks like:
#!/usr/bin/env bash
# Install additional composer dependencies
echo "Installing Composer Dependencies"
composer require barryvdh/laravel-debugbar
# Copy standard files from ~/.lambo/includes into every new project
echo "Copying Include Files"
cp -R ~/.lambo/includes/ $PROJECTPATH
# Add a git commit after given modifications
echo "Committing after modifications to Git"
git add .
git commit -am "Initialize Composer dependencies and additional files."
Just like with the config file, upgrade Lambo (composer global update tightenco/lambo
) and then create an after
file (lambo make-after
). Edit that file and it will run every time after Lambo runs.
I put a nitpick.json
file in that includes/
directory so my Nitpick config is set up right on every project I start.
Now it's easier than ever to use Lambo to spin up the perfect Laravel application, every time.
Laravel devs: what are the packages you install on *every* app?
— Matt Stauffer (@stauffermatt) July 14, 2017
I wanted to know for my talk, but I was also just curious for my own purposes. Are there any packages I should check out that everyone else already knows about?
Here's what I found, in order of the number of recommendations I received:
Dang, y'all love these packages.
They don't necessarily work on every app, but these still came very highly recommended.
These had two or three recommendations—enough to pique my interest, but clearly not globally installed.
It's hard to just ignore someone who went to the effort to compose a tweet. Here you go, folks. Only one recommendation for each of these; but someone, somewhere, wanted to recommend it.
Note: I ignored a few recommendations that were extremely context-specific (Mongo) or which were recommended by the package author. :)
A few folks instead pointed to publicly available app skeletons, each of which has their own unique set of packages and customizations:
At Tighten, we currently don't use a skeleton, but I do have some cool news to share at Laracon that will show how easy it is to do some of this same work, even without a skeleton.
There's another post Eric Barnes linked me to where Mike Erickson asked people their one favorite package—similar to this question, but not quite the same. Check it out: What is the one package you install in all Laravel projects?
It looks like they found the same thing as I did: Debugbar and IDE helper are the winners in our community. If you haven't checked them out, be sure to do so!
Did I miss any absolutely vital packages here? Let me know on Twitter.
Within the last year or two I've watched references to service workers and PWAs, or Progressive Web Apps, go from never-heard-of-them to every-other-tweet. So! Let's go learn. What is a PWA? What's their history, purpose, and value? Why do we care?
Tighten Blog: A Brief Introduction to Progressive Web Apps, or PWAs
This question brings up the point that, unlike a framework backed by a company, a framework backed by an individual relies on that individual's desire and ability to keep the project running. What happens if Taylor decides he wants to retire and be a goat farmer?
I'd like to share a few points in response to this concern.
Note: I also recorded a Five-Minute Geek Show about this back in 2015.
Most simply, I think the majority of people with this concern have never stopped to just look whether this has been considered. It has. For a long time.
Taylor even shared his answer on Reddit about a year ago:
If anything ever happens to Taylor, Jeffrey Way of Laracasts will take over. Jeffrey has been here since almost the beginning, has access to everything he needs to keep the products and the framework running, and is a great developer and teacher with a vision for the framework.
Many folks have also said that what they really want is a company instead of a person.
Well, here ya go: Taylor may be the primary creator and maintainer of Laravel, the open source framework, but the ecosystem of tools around Laravel is managed by Laravel, LLC, a company with an owner (Taylor) and an employee (Mohamed Said). If Taylor ever disappears, the company still exists. It still has a flow of revenue and an employee to run it.
Sure, the company doesn't have 500 employees, but it is also not just tied to Taylor's personal social security number and brain. There are systems and structures in place, already.
Let's say there weren't a plan, and Taylor did disappear. Let's say Laravel, LLC and Mohamed didn't exist. Let's say the formal plan for Jeffrey to take over weren't already in place.
If, in that non-existent circumstance, Taylor disappeared, Laravel Forge and Envoyer and Spark would be effectively end-of-lifed.
... and, just like when EllisLab completely dropped the ball on CodeIgniter for many years, the community of contributors and users of the framework would continue to develop it, adding new features. If necessary, frameworks would fork off of Laravel and at least one would start as a near-mirror. Laravel itself would get security updates and bug fixes by the massive community of people who submit pull requests to the framework every day.
I can't specifically speak for other consultancies who use Laravel, but Tighten has already committed time, effort, and finances to support the ongoing development of Laravel. This wouldn't stop if Taylor disappeared. Ideologically, we want the work to move forward and would do what we could to support the work and the community of Laravel.
But for those of you who may pooh pooh our ideological goals, there are also pragmatic reasons for us to actively work for the good of Laravel. We love the tool. There's a reason it's the tool we pick for the majority of our projects: it's a fantastic tool and a fantastic ecosystem. We make money using Laravel. We have no interest in it going away and we're committed to seeing it succeed.
There's plenty more, and I hope to find some time to write more posts about other aspects of Laravel's enterprise readiness or non-readiness. But this is the simplest, easiest to address, so I hope we can stop bringing it up and call this handled. Good? Good.
root root 4096 Mar 29 18:44 .
root root 4096 Mar 28 14:15 ..
root root 47 Mar 29 14:54 current -> ./releases/1490824249
root root 4096 Mar 29 14:50 releases
Where you're expecting to see your webroot containing your Git repository, instead it's this weird structure. What gives?
The reason you're getting zero-downtime deploy from these tools is because the entire deploy process—clone, composer install, etc.—doesn't happen in the directory that is currently serving your site. Instead, each new release gets its own separate "release" directory, all while your site is still being served from its current "release" directory.
- current -> ./releases/1490802721 * apache/nginx serves from this directory
- releases
- 1490802133 (the new release you're building right now)
- 1490802721 (latest complete release)
- 1490803081 (a little bit older release)
- 1490824249 (an even older release)
All of these release directories are just subdirectories of releases
. Each directory here represents one of your deploys, and each directory individually has everything needed to serve your site. Your web server points to yourproject/current/public
and therefore the "currently served" release is just that which has a symlink pointed at it from yourproject/current
.
So, once the build process is complete for each new release, your deploy tool will delete the current
symlink and create a new current
symlink that points to your latest release. Boom. Now that release is live.
In general, Laravel is no different from any other project in that this style of deployment works great. In fact, a tool provided by Taylor Otwell, Envoyer, is predicated around this release style.
However, every tool has a different set of caveats around how to handle them well in zero-downtime settings. Here's why:
There are always some things that you want to persist between releases. Most of it lives in databases and caches, which is fine—nothing's wiping your database on every deploy. But some isn't. Take the storage
folder; do you want to wipe that every time you push a new release? Naw. What about the .env
file? Definitely naw. So there are a few quick tricks.
Remember: If you use Envoyer, this is all handled for you. But if you don't, here's what to do.
composer install -o --no-interaction
php artisan migrate --no-interaction --force
npm install (or yarn) and either gulp --production (Elixir) or npm run production (Mix)
rm -rf storage && ln -s ../../storage ./ (remove the release's storage directory and symlink it to a storage folder in the parent)
ln -s ../../.env ./ (symlink the release's .env file to a .env file in the parent)
php artisan route:cache
php artisan config:cache
Update your current symlink to point at your new release. This should be handled by your deploy tool.
php artisan queue:restart
This is all you need to do on every deploy. As you can tell, you'll end up with a root directory that looks a bit more like this:
root root 4096 Mar 29 18:44 .
root root 4096 Mar 28 14:15 ..
root root 1033 Mar 29 18:44 .env
root root 47 Mar 29 14:54 current -> ./releases/1490824249
root root 4096 Mar 29 14:50 releases
root root 4096 Mar 29 14:51 storage
And, of course, you'll need to create the .env
file and the storage
directory and subdirectories before you run your build script for the first time.
But that's it! You're now ready to go! Get deploying!
Looking for a list of steps to take on every deploy, regardless of whether or not it's Capistrano?
composer install -o --no-interaction
php artisan migrate --no-interaction --force
yarn && npm run production
php artisan route:cache
php artisan config:cache
php artisan queue:restart
laravel new my-react-project
cd my-react-project
package.json
Here's what the devDependencies
section of the default package.json
looks like right now:
{
"devDependencies": {
"axios": "^0.15.3",
"bootstrap-sass": "^3.3.7",
"cross-env": "^3.2.3",
"jquery": "^3.1.1",
"laravel-mix": "^0.8.3",
"lodash": "^4.17.4",
"vue": "^2.1.10"
}
}
You'll definitely want to drop this entry:
"vue": "^2.1.10"
If you want, you can also drop Axios (a fantastic, framework-agnostic, HTTP client) and Lodash (a simple JS library providing collection support and other convenient tools augmenting JS's somewhat sparse API).
You can drop jQuery and Bootstrap-sass, if you won't be using them, and if you're not a Windows user you can drop cross-env
.
Note: if you drop the
cross-env
dependency, you also have to remove the string "cross-env " from the beginning of each of thescripts
lines (e.g.cross-env NODE_ENV=...
becomesNODE_ENV=...
).
So, if you wanted a brand new project that uses Mix but no other dependencies, here's what your package.json
would look like:
{
"private": true,
"scripts": {
"dev": "NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
"watch": "NODE_ENV=development node_modules/webpack/bin/webpack.js --watch --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
"watch-poll": "NODE_ENV=development node_modules/webpack/bin/webpack.js --watch --watch-poll --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
"hot": "NODE_ENV=development node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --hot --config=node_modules/laravel-mix/setup/webpack.config.js",
"production": "NODE_ENV=production node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
},
"devDependencies": {
"laravel-mix": "^0.8.3"
}
}
Once you modify your package.json
file, if you're using Yarn, you'll want to update your yarn.lock
file:
yarn upgrade
In resources/assets/js/app.js
, our default script requires a bootstrap and sets up a sample Vue component. Here's what it looks like right now (with comments removed):
require('./bootstrap');
Vue.component('example', require('./components/Example.vue'));
const app = new Vue({
el: '#app'
});
If you are dropping all the other dependencies as well (Axios, jQuery, Lodash, etc.) you can just delete all the code in app.js
, and delete the bootstrap (resources/assets/js/bootstrap.js
) and the sample Vue component (resources/assets/js/components/Example.vue
).
If you're planning to keep all the non-Vue components, you'll want to delete the Vue lines of app.js
, and then modify the bootstrap. Open resources/assets/js/bootstrap.js
and drop this line: window.Vue = require('vue');
.
That's it! You've wiped all the Vue out of the app and optionally all the other dependencies.
In case you're like me and want all the other dependencies but just not Vue some times, here's a short task list for that:
- Remove the "vue" entry from package.json
- Delete the Vue.component(... line and the const app = new Vue({ block from resources/assets/js/app.js
- Delete window.Vue = require('vue'); from resources/assets/js/bootstrap.js
- Delete the resources/assets/js/components directory

Have you ever needed to pull some data from a Google Spreadsheet? My default in the past would be to export the data and upload it to the app directly, but it turns out it's not very difficult to read directly from Google Spreadsheets using the Google Drive API.
If you're not familiar with Laravel Elixir, it's a wrapper around Gulp that makes it really simple to handle common build steps—CSS pre-processing like Sass and Less, JavaScript processing like Browserify and Webpack, and more.
In Laravel 5.4, Elixir has been replaced by a new project called Mix. The tools have the same end goals, but go about it in very different ways.
If you take a look at the default files for Elixir and Mix, you'll see they're very similar:
// Elixir's gulpfile.js
const elixir = require('laravel-elixir');
require('laravel-elixir-vue-2');
elixir((mix) => {
mix.sass('app.scss')
.webpack('app.js');
});
// Mix's webpack.mix.js
const { mix } = require('laravel-mix');
mix.js('resources/assets/js/app.js', 'public/js')
.sass('resources/assets/sass/app.scss', 'public/css');
Looks pretty similar, right? Sure, Elixir's calls are happening in an anonymous function, and Mix seems to prefer explicitly providing the source and destination, but we're doing pretty much the same thing here.
There's one big difference you'll experience on day one: where with Elixir you ran using either gulp
or gulp watch
, with Mix you'll run npm run dev
or npm run watch
. (You can also run npm run hot
for "HMR", or Hot Module Reloading, which "hot reloads" your Vue files but not other assets; or npm run production
to generate your assets with production settings like minification).
Just like with Elixir, your default Sass file will be in resources/assets/sass/app.scss
(and the file is exactly the same), and just like with Elixir your default JS file will be in resources/assets/js/app.js
(and the file is exactly the same—to learn more about the new-to-5.3 Vue-based structure, check out my post about frontend structure in 5.3).
If you dig into the bootstrap file that's included in app.js
(resources/assets/js/bootstrap.js
), you'll see that we're setting our X-CSRF-TOKEN
using Axios instead of Vue-Resource (Vue-Resource was retired in 2016).
If you run npm run dev
on a Mix project, this is what you'll see:
Our generated files end up in the same place by default that they did with Elixir: public/css/app.css
and public/js/app.js
.
As you've already seen, you can easily mix Sass and JS; Sass, predictably, runs on your Sass file(s) and outputs them as CSS. The JS method gives you access to ES2015, .vue
(Vueify) compilation, production minification, and a host of other processing on your JavaScript files.
You can also mix Less:
mix.less('resources/assets/less/app.less', 'public/css');
You can combine files together:
mix.combine([
'public/css/vendor/jquery-ui-one-thing.css',
'public/css/vendor/jquery-ui-another-thing.css'
], 'public/css/vendor.css');
You can copy files or directories:
mix.copy('node_modules/jquery-ui/some-theme-thing.css', 'public/css/some-jquery-ui-theme-thing.css');
mix.copy('node_modules/jquery-ui/css', 'public/css/jquery-ui');
Unlike Elixir, source maps are now disabled by default, but you can bring them back:
mix.sourceMaps();
Operating system notifications are enabled by default, but if you don't want them to run, you can disable with the disableNotifications()
method.
mix-manifest.json and cache-busting

If you're familiar with Elixir, you might notice one thing in that output image above that is a little different from Elixir: Mix is generating a manifest file out of the box (public/mix-manifest.json
). Elixir also generated a manifest file (public/build/rev-manifest.json
), but it would only generate it if you explicitly enabled the cache-busting (versioning) feature. Mix generates it regardless.
If you're not familiar, these manifest files are maps between a file path (e.g. /js/app.js
) and the path for the versioned copy of that file (something like /js/app-86ff5d31a2.js
). That way you can have simple references in your HTML (<script src="{{ mix('js/app.js') }}">
) that point to your versioned files.
Unlike Elixir, however, Mix generates this file even if you're not using cache busting, but it's just a direct map:
{
"/js/app.js": "/js/app.js",
"/css/app.css": "/css/app.css"
}
Another interesting change for those who've used Elixir before: your built files now end up in their normal output directories, not a separate build
directory; so your versioned JS file, for example, will live in public/js/app-86ff5d31a2.js
.
To enable cache busting in Mix, just append .version()
in your Mix file:
mix.js('resources/assets/js/app.js', 'public/js')
.sass('resources/assets/sass/app.scss', 'public/css')
.version();
This is a lot simpler than passing the actual file names like you had to in Elixir.
The mix() helper

As I mentioned above, the frontend helper you'll want to use to reference your assets is now mix()
instead of elixir()
, but it still functions exactly the same. If you use Mix, you'll want to remove these lines in the default Laravel template:
<link href="/css/app.css" rel="stylesheet">
...
<script src="/js/app.js"></script>
And replace them with these:
<link href="{{ mix('/css/app.css') }}" rel="stylesheet">
...
<script src="{{ mix('/js/app.js') }}"></script>
Remember, this function just looks that string up in mix-manifest.json
and returns the mapped build file. It's only necessary when you're using cache busting, but it doesn't hurt to just use it by default, because that'll make it a lot easier to add cache busting down the road if you want it.
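Conceptually the helper is just a lookup into that JSON map. A hypothetical sketch in JavaScript (Laravel's real helper is a PHP function; the manifest contents here are invented for the example):

```javascript
// Sketch of a mix()-style helper: normalize the path, look it up in the
// manifest, and return the (possibly cache-busted) build path.
const manifest = {
    '/js/app.js': '/js/app-86ff5d31a2.js', // versioned entry
    '/css/app.css': '/css/app.css',        // unversioned: maps to itself
};

function mix(file) {
    const key = file.startsWith('/') ? file : `/${file}`;
    if (!(key in manifest)) {
        throw new Error(`Unable to locate Mix file: ${file}`);
    }
    return manifest[key];
}
```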
Webpack is exciting to many people in part because of the intelligence it offers about the structure of your code. I don't yet fully understand tree shaking (and Mix doesn't handle it out of the box), but Mix does make it simple to separate your custom code (which might change often) from your vendor code (which shouldn't), making it far less likely that your users will have to re-download all of your vendor code every time you push a new build.
To take advantage of this feature, you'll want to use the extract()
function, which allows you to define that a given set of libraries or modules (keyed by the same string they're keyed in npm and require()
statements) will be extracted into a separate build file named vendor.js
:
mix.js('resources/assets/js/app.js', 'public/js')
.extract(['vue', 'jquery']);
In this circumstance, Mix has now generated three files for me: public/js/app.js
, public/js/vendor.js
, and a third Webpack-specific file, public/js/manifest.js
. I need to import all three, in this order, in order for it to work:
<script src="{{ mix('/js/manifest.js') }}"></script>
<script src="{{ mix('/js/vendor.js') }}"></script>
<script src="{{ mix('/js/app.js') }}"></script>
If you're using cache busting, and you make changes to your app-specific code, your vendor.js
file will now still remain cached, and only your app-specific code will be cache busted–making your site load much faster.
If you're interested in adding your own custom Webpack configuration, you can; just pass your Webpack configuration in:
mix.webpackConfig({
resolve: {
modules: [
path.resolve(__dirname, 'vendor/laravel/spark/resources/assets/js')
]
}
});
(I'm not a Webpack guru, so I'm just going to paste that example in straight from the docs.)
Let's say you're interested in doing some conditional cleverness in your Webpack file. Maybe you want to copy one thing when production
is run but not other times. How exactly would you do that?
The first place I looked was the Node environment object, which we have access to as process.env
. We can check any values there–including any global environment variables on your system, which may open up an interesting opportunity, so we could conditionally check the process.env.NODE_ENV
value:
if (process.env.NODE_ENV == 'production') {
mix.webpackConfig({ ... });
}
But after reading the source, I could tell NODE_ENV
was not intended to be the primary check; instead, there's a configuration object with an inProduction
flag on it. This isn't documented, so use with caution, but you can update the import at the top of your Webpack file and then use that config object:
const { mix, config } = require('laravel-mix');
if (config.inProduction) {
mix.webpackConfig({ ... });
}
You can take a look at your package.json
and see the list of dependencies that are included with each project. Remember, these are just those that are pulled by the default app.js
and bootstrap.js
, but you can just delete the references out of app.js
and package.json
and re-run npm install
and they won't end up in your final files.
(The same goes for the default app.scss
file, which pulls Bootstrap styles in.)
Laravel Mix is a build tool that replaces Laravel Elixir. It has almost the same API, but is based on Webpack instead of Gulp. The end. Go build great things.
I've spent the last five to ten years trying to make small changes for the good of the world by working in the relationships I already have, in person and online, to help White Americans become more engaged and interested in working toward justice. A little bit at a time, over coffee or surprisingly decent Facebook comment threads.
On November 9th, my wife turned to me and said: "Matt, it's time for you to stop trying to change individual people on Facebook and go do something real." Ouch. But she was right.
Right around that time DeRay Mckesson put out a call to programmers who wanted to help work for social change. I responded, as did quite a few others, and I met DeRay and Sam and Aditi and a few other incredible individuals really making a difference. Over the span of a few weeks I had the chance to work on The Resistance Manual and a few other great projects.
During this time I've had no less than a dozen friends in tech ask me, "How can I as a technologist contribute to social progress?" I wanted to make that question as easy to answer as possible, and I knew there are far more projects out there than just those we were working on at StayWoke. So I decided to catalog them all in one space.
The first version of the site was a static site, hosted using GitHub pages, pulling its data using JavaScript from locally-hosted JSON files. The idea here was to make it easy for folks to contribute: make a GitHub pull request updating the JSON and we'll handle the rest.
The problem is, JSON isn't that user-friendly, and pull requests aren't either. I wanted to keep the same spirit of GitHub pages–simple, easy to spin up, editable by anyone–but on a dynamic server. Turns out, the answer is Gomix (formerly Hyperdev).
Gomix is a platform that makes it absurdly easy to spin up a new app (static HTML or Node) and see it online instantly. You can also invite your friends to collaborate, and the moment you make a change in the editor, your site updates. So, at this point I'm using Gomix and Node, and Express is an easy pick.
I strongly considered using Firebase for data storage, but the Gomix team linked me to this Gomix site using Google Spreadsheets as the backing data source and I really wanted to try it out.
So we've now settled: I'll take my old HTML and JavaScript, but instead of the JavaScript loading its data from JSON files, I'll run an Express app on Gomix pulling the data from Google Spreadsheets and output its data in a JSON format. No big deal.
Gomix treats your code as a shareable document which can be collaborated on and "remixed," or copied into a new project–like GitHub's forking, but with no ties to the original app. Hit "remix" on any public Gomix project and it'll copy all its code into a new project that you own with a randomly generated name.
So the first thing I did was "remix" that data dashboard app. Why start from scratch, especially as someone who's literally never written Node code in my life? The code for accessing Google Spreadsheets looks like this:
const GoogleSpreadsheets = require('google-spreadsheets')
GoogleSpreadsheets({
key: 'google spreadsheet id here'
}, function(err, spreadsheet) {
spreadsheet.worksheets[0].cells({
range: 'R1C1:R20C9'
}, function(err, result) {
// result is the entire sheet within the provided range
})
})
And, if you've never worked with Express before, you teach the server how to run it using the start
script key in package.json
:
{
...,
"scripts": {
"start": "node server.js"
},
...
}
Now, we just edit server.js
(simplified version here to give the gist of it):
const express = require('express')
const app = express()
app.get('/', function (request, response) {
response.sendFile(__dirname + '/views/index.html')
})
// listen for requests :)
const listener = app.listen(3030, function () {
console.log('Your app is listening on port ' + listener.address().port)
})
If you have your dependencies set up right, the above app can be viewed at localhost:3030
by simply running npm install && npm start
on your command line. It's brilliantly simple.
So we have a working Express app. It's running on Gomix, so literally every time I edit the files, Gomix updates the server and it's completely accessible at my staging URL. I know how to pull data from Google Spreadsheets.
All that remained was getting my data into Google Spreadsheets and running some transformations over the returned data to structure it like JSON so my pre-existing JavaScript could consume it. It looked a little bit like this:
// server.js
const json = require('./controllers/json')
app.get('/data/orgs.json', json.orgs)
// ... repeat for tools, projects, resources, data sources
// controllers/json.js
const orgs = (req, res) => {
// get all the data from the "Organizations" sheet in our Google Spreadsheet
// transform all the data
// return it as JSON
}
// ...repeat
module.exports = {
orgs
}
And here's what my spreadsheet is shaped like (this is the organizations sheet):
We now have tech-forward-2.gomix.me/data/orgs.json
returning JSON cobbled together from our Google Spreadsheets data. It was easy after that to set up a few Google Forms allowing people to suggest additions to the app (and it was intermediately difficult to set up a custom domain, but they tell me that will be easier very soon).
So, I launched the project and everything worked great. However, I heard from a few folks that the way I had implemented the JavaScript loading left screen reader users out in the cold, so yesterday I re-wrote the app to pass the Google Spreadsheets data directly to the view, dropping the AJAX entirely.
It made the app load much slower, but it was surprisingly easy to implement; I just set Express up to use Handlebars as its templating engine (which I had already been using on the frontend, so I could copy exactly the same templates with no changes) and passed the data directly into the views.
A little bit later, I googled "node express cache" and landed on Simple server-side cache for Express.js, and about 20 minutes later I had a 1-minute cache set up on the Google Spreadsheets calls.
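The pattern is simple enough to sketch without Express: memoize the expensive call and invalidate after a TTL. This is a rough approximation of the approach, not the code from the linked article; fetchOrgs below is a hypothetical stand-in for the real Google Spreadsheets call:

```javascript
// A minimal time-based cache around an expensive async call.
function cached(ttlMs, fn) {
  let value = null;
  let storedAt = 0;
  return async function () {
    if (value !== null && Date.now() - storedAt < ttlMs) {
      return value; // fresh enough: skip the slow call
    }
    value = await fn();
    storedAt = Date.now();
    return value;
  };
}

// Hypothetical stand-in for the spreadsheet fetch
let spreadsheetCalls = 0;
const fetchOrgs = async () => {
  spreadsheetCalls++;
  return [{ name: 'Example Org' }]; // pretend this came from Google
};

// A 1-minute cache, like the one I set up on the real app
const getOrgs = cached(60 * 1000, fetchOrgs);
```

Every route handler can then await getOrgs(), and within the TTL window only the first request pays the spreadsheet round-trip.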
You can see all of the code on Gomix, remix it yourself, or see the backup on GitHub.
That's it. It took me a few late-night coding sessions, a bit of Googling, and I have my first production Express app, consuming and caching data from Google Spreadsheets, hosted on Gomix. Beautiful.
Important thanks: I learned everything I know about Express by reading code from my friend Pascal who learned Express about a week before I did. Also, thanks to DeRay and StayWoke for bringing me in and thanks to the entire Gomix team for being awesome.
First, a quick refresher: while everyone talking about testing uses words a little bit differently, it's pretty well agreed that Unit tests are responsible for testing little chunks of code (a single method on a single class, for example) in isolation, whereas Application tests (similar, or the same as, integration tests) test the entire application as a whole.
Since Jeffrey Way's "Integrated" package was brought into the core in Laravel 5.1, we've had access to methods like ->visit()
, ->get()
, ->see()
, etc.—making it seem like we were describing the actions of a browser visiting the site. This really transformed our ability to write application tests, making calls like this possible:
/** @test */
public function cta_link_functions()
{
$this->visit('/sales-page')
->click('Try it now!')
->see('Sign up for trial')
->onPage('trial-signup');
}
In the background, it was really PHP spinning up a request, passing it through our application, crawling the DOM, and then making more requests until the chain is done. There was no browser. But it felt like it.
What if any of your application's functionality relied on JavaScript, though? Sorry. Out of luck. Because this isn't a real browser, it didn't know or care about your JavaScript.
Over time, the desire to use and test JavaScript components in our Laravel apps grew, and so did the discontent that a growing number of applications were un-testable using the tools Laravel provided out of the box.
With Dusk, Taylor has completely re-written how application testing works in Laravel. Everything is now based on a tool called ChromeDriver, which is a standalone server that actually controls Chrome/Chromium. When you write application tests, Dusk sends your commands to ChromeDriver, which then spins up Chrome to run your tests in the browser and then reports back the results.
All of the non-application testing aspects of Laravel–its unit testing functionalities, and HTTP-request-based tests like $this->get()
–are still using the same code they always were. But the more advanced features like $this->visit()
just don't work at all out of the box. It's up to you to pull in an application testing package. You can either pull in Dusk (composer require laravel/dusk --dev
) or you can pull in the pre-5.4 application testing package (composer require laravel/browser-kit-testing --dev
).
Note: if you pull in Browser Kit Testing, you'll need to modify your
TestCase
to extendLaravel\BrowserKitTesting\TestCase
instead ofIlluminate\Foundation\Testing\TestCase
. Upgrading your test suite from a pre-5.4 app? Check out Adam Wathan's Upgrading Your Test Suite for Laravel 5.4.
Once you've brought Dusk into your application (composer require laravel/dusk --dev
), you'll need to register the service provider. You could add it to the list of service providers in config/app.php
, but that's not actually safe–Dusk, for the purpose of testing, opens up a lot of manual overrides that you don't want on your production site. Instead, conditionally register it in the register
method of AppServiceProvider
:
// AppServiceProvider
use Laravel\Dusk\DuskServiceProvider;
...
public function register()
{
if ($this->app->environment('local', 'testing')) {
$this->app->register(DuskServiceProvider::class);
}
}
Now we need to install Dusk, which will create a tests/Browser
directory.
php artisan dusk:install
You may never have used the APP_URL
key in your .env
file—it's often not actually necessary for many applications–but you'll need to set it now, since Dusk relies on it to visit your application. This will have to be an actually-accessible URL, because, remember, this is a real browser we're working with.
We now run our tests using php artisan dusk
, which can accept any arguments that PHPUnit can–for example, php artisan dusk --filter=the_big_button_works
.
Let's say I want to write a Dusk test just like our application test we looked at earlier–click a button and make sure it takes me where I want to go. Let's write.
php artisan dusk:make BigButtonTest
Let's open tests/Browser/BigButtonTest.php
and see what we get by default:
<?php
namespace Tests\Browser;
use Tests\DuskTestCase;
use Illuminate\Foundation\Testing\DatabaseMigrations;
class BigButtonTest extends DuskTestCase
{
/**
* A Dusk test example.
*
* @return void
*/
public function testExample()
{
$this->browse(function ($browser) {
$browser->visit('/')
->assertSee('Laravel');
});
}
}
A few things you'll notice that are different from what we're used to.
First, we have namespaces in our tests now! This is actually true in 5.4 whether or not you're using Dusk; by default there are two namespaces for our tests, Tests\Unit
and Tests\Feature
.
Second, DatabaseTransactions
and WithoutMiddleware
aren't imported by default anymore.
Third, we're no longer calling $this->visit
directly. We're now doing all of our testing in a closure in the context of the browse()
function, which encapsulates our Dusk calls.
Fourth, some of the methods available to us have been renamed to be a little more consistent with other assertions–for example, see()
is now assertSee()
.
Before we do anything else, let's just run our test. Remember, that's php artisan dusk
.
Did you see that? Real. Browser. Windows.
Let's make this do what it did before:
$this->browse(function ($browser) {
$browser->visit('/sales-page')
->clickLink('Try it now!')
->assertSee('Sign up for trial')
->assertPathIs('/trial-signup');
});
OK. We can do what we once could do. This is good. But what's new?
There are so many new features and new ways of interacting that I'm going to have to point you to the docs to learn everything. But here are a few pieces that are really distinctly different from how application testing used to work, so you can get a sense of what we're working with here.
It's possible to get or set the value (the "value" property) and get the text (text contents) or attribute of any element on the page given its jQuery-style selector.
// Get or set
$inputValue = $browser->value('#name-input');
$browser->value('#email-input', 'matt@matt.com');
// Get
$welcomeDivText = $browser->text('.welcome-text');
$buttonDataTarget = $browser->attribute('.button', 'data-target');
Interacting with forms is very similar to what it was like previously, but let's cover it briefly. First, you can target a field either by its name or by a jQuery-style selector.
$browser->type('email', 'matt@matt.com');
$browser->type('#name-input', 'matt');
You can clear any values:
$browser->clear('password');
You can select a dropdown value:
$browser->select('plan', 'premium');
You can check a checkbox or radio button:
$browser->check('agree');
$browser->uncheck('mailing-list');
$browser->radio('referred-by', 'friend');
You can attach files:
$browser->attach('profile-picture', __DIR__ . '/photos/user.jpg');
And you can even perform more complex keyboard- and mouse-based interactions:
// type 'hype' while holding the command key
$browser->keys('#magic-box', ['{command}', 'hype']);
$browser->click('#randomize');
$browser->mouseover('.hover-me');
$browser->drag('#tag__awesome', '.approved-tags');
Finally, we can scope any of our actions to a particular form or section of the site we're working on:
$browser->with('.sign-up-form', function ($form) {
$form->type('name', 'Jim')
->clickLink('Go');
});
This is probably the most foreign concept in Dusk. Because this is a real browser, it actually has to load all of the external assets on the page–which means your content may not be ready.
There are a few methods that help you work around this. First, you can just pause the test manually:
// Pause for 500ms
$browser->pause(500);
More commonly, you can wait (by default, up to 5 seconds) until a given element either appears or disappears:
$browser->waitFor('.chat-box');
// wait a maximum of 2 seconds for the chat box to appear
$browser->waitFor('.chat-box', 2);
$browser->waitUntilMissing('.loading');
$browser->waitForText('You have arrived!');
$browser->waitForLink('Proceed');
// wait and scope
$browser->whenAvailable('.chat-box', function ($chatBox) {
$chatBox->assertSee('What is your message?')
->type('message', 'Hello!')
->press('Send');
});
// wait until JavaScript expression returns true
$browser->waitUntil('App.initialized');
Taking an example from the docs, what if you want to test that a websocket-based chat works? Just use two separate browser sessions:
$this->browse(function ($first, $second) {
$first->loginAs(User::find(1))
->visit('/home')
->waitForText('Message');
$second->loginAs(User::find(2))
->visit('/home')
->waitForText('Message')
->type('message', 'Hey Taylor')
->press('Send');
$first->waitForText('Hey Taylor')
->assertSee('Jeffrey Way');
});
As you can see, if you ask for more parameters in your browse()
closure, each will be passed a new browser session that you can interact with.
Our first browser logs in as user 1, visits the home route, and then waits (for up to 5 seconds) until it sees the text "Message," which in this test is representing the chat box appearing on the page. Next, our other user logs in as user 2, visits the home route, waits to see the chat box, and then types a message into it and hits send. Finally, our original user watches for that message to come through and asserts that the name of the second user (which we are presuming is named "Jeffrey Way") shows up.
Also note that loginAs
, which used to be named be()
or actingAs()
, can take either a User
instance or a user ID.
Most of the assertions in Dusk are the same as before, but many have new names, and there are a few new ones. Check the whole list here, but here are a few notable new assertions:
$browser->assertTitle('My App - Home');
$browser->assertTitleContains('My New Blog Post');
$browser->assertVisible('.chat-box');
$browser->assertMissing('.loading');
Reading longer and more complex sets of Dusk interactions can be hard to follow at times, so there's an optional concept called a Page that makes it easy to group functionality in your Dusk tests. A page represents a URL that can be used to navigate to it, a set of assertions that can be run to make sure the browser is still on this page, and a set of nicknames for common selectors.
To make a page, use the dusk:page
Artisan command:
php artisan dusk:page Dashboard
Here's what that generates for us:
<?php
namespace Tests\Browser\Pages;
use Laravel\Dusk\Browser;
use Laravel\Dusk\Page as BasePage;
class Dashboard extends BasePage
{
/**
* Get the URL for the page.
*
* @return string
*/
public function url()
{
return '/';
}
/**
* Assert that the browser is on the page.
*
* @return void
*/
public function assert(Browser $browser)
{
$browser->assertPathIs($this->url());
}
/**
* Get the element shortcuts for the page.
*
* @return array
*/
public function elements()
{
return [
'@element' => '#selector',
];
}
}
The url()
method is clear: it tells how to navigate to this page. The assert()
method is also relatively clear: "Consider me still on this page as long as this assertion passes."
The elements()
array makes it possible to create shorthand selectors you can use to refer to elements any time your browser is "on" this page. Here's a way we might choose to fill this out:
class Dashboard extends BasePage
{
public function url()
{
return '/dashboard';
}
public function assert(Browser $browser)
{
$browser->assertPathIs($this->url());
}
public function elements()
{
return [
'@createPost' => '#create-new-post-button',
'@graphs' => '.dashboard__graphs',
];
}
}
You can also manually create custom methods for interactions on each page. For example, one common behavior in your tests might be to set a few dropdowns and then click a "filter" button. Let's make it:
// Dashboard
public function filterGraph($browser, $filterStatus)
{
$browser->select('filterBy', $filterStatus)
->select('limit', 'one-month')
->press('Filter');
}
There are a few different ways we can use a Page. First, we can visit it, which both directs the browser to it and also loads our shorthand selectors:
use Tests\Browser\Pages\Dashboard;
...
$browser->visit(new Dashboard)
->assertSee('@graphs');
But what if we're already on this page because we clicked a button somewhere else? The on()
method loads up our Page:
use Tests\Browser\Pages\Dashboard;
...
$browser->visit('/')
->type('email', 'matt@matt.com')
->type('password', 'secret')
->press('Log in')
->on(new Dashboard)
->assertSee('@graphs');
Finally, here's how we use our custom methods:
$browser->visit(new Dashboard)
->filterGraph('donors')
->assertSee('Sally');
You can also create global shorthand selectors you can use anywhere in your site in the default Page class at tests/Browser/Pages/Page
, which is loaded on every page. Just add them to its siteElements()
method.
// tests/Browser/Pages/Page
public static function siteElements()
{
return [
'@openChat' => '#chat-box__open-button',
];
}
OK, so you've seen how powerful this all is. A few side notes.
First, you can create a custom Dusk environment file at .env.dusk.local
(or .env.dusk.whateverEnvironmentYouWantToTest
).
Second, some of the methods require jQuery to select content on the page. Dusk will check whether your page loads jQuery, and if not, will inject it for you during the tests.
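The check-then-inject pattern is easy to picture. Here's a generic sketch of the technique (this is not Dusk's actual implementation, and the script URL is just a placeholder), written so the window and document are passed in:

```javascript
// Generic "inject jQuery if the page doesn't already have it" pattern.
// Not Dusk's code; the src argument is an example placeholder.
function ensureJquery(win, doc, src) {
  if (win.jQuery) {
    return false; // page already loads jQuery; nothing to do
  }
  const script = doc.createElement('script');
  script.src = src;
  doc.head.appendChild(script);
  return true; // we injected it
}
```

The same idea works for any helper library a test runner wants to guarantee is present before it starts selecting elements.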
Finally, any time a test fails, Dusk will take a screenshot of the failed page for you and put it in the tests\Browser\Screenshots
directory. You'll see exactly what the page looks like:
That's all, folks. Enjoy! Remember, you can still keep writing the tests you've always written–and you can even pull in the old testing package, if you'd prefer. But there's a whole new world open to you now. Try it out a bit.
Before Laravel 5.3, creating a new Artisan console command (say, php artisan sync:dates
) required you to create a new class for that command and register it in the Console Kernel. This is fine, but sometimes it feels like overkill for what might end up just being a single line of functional code.
As of Laravel 5.3, you'll notice that there's a new method in the Console/Kernel.php
file named commands()
, and it loads a new file at routes/console.php
. This new "console routes" file allows us to define Artisan console commands with a single Closure instead of the prior "define a class then register it in the console Kernel" flow. Much faster, much easier.
So, open up routes/console.php
and you'll already see a sample command:
Artisan::command('inspire', function () {
$this->comment(Inspiring::quote());
})->describe('Display an inspiring quote');
As you can see, we have a new fluent builder for defining Artisan commands. We've got the signature
("inspire"), the handle()
(the closure), and the description
("Display an inspiring quote").
What if we have a parameter, or if we want to inject a dependency? It works just like it did with the old syntax.
Artisan::command('sync:conference {id}', function (JoindIn $joindin) {
$joindin->syncConference($this->argument('id'));
})->describe('Sync a given conference from JoindIn');
But here's something else interesting you can do that you can't with traditional Artisan command definition: you can take your signature arguments as parameters in the Closure, which is much more like what you'd expect if you were new to Laravel:
Artisan::command('sync:conference {id}', function ($id, JoindIn $joindin) {
$joindin->syncConference($id);
})->describe('Sync a given conference from JoindIn');
As you can see, we now have a simpler, more convenient, more fluent, and more compact way to define Artisan commands. Boom.
The default configuration for many Linux server setups—including that for Laravel Forge-created servers—leaves a lot of old Linux headers sitting around every time your system downloads upgrades. Folders like linux-headers-3.13.0-53-generic (3.13.0-53.89)
, just full of hundreds and thousands of files, slowly taking over your server.
Normally this is no problem. The files are tiny. The server I'm working on right now has 30GB of disk space and 20GB free.
But this morning I started getting a series of tweets about my site being down. Thankfully, this isn't the first time I've hit this error, so it was an easy fix. But still, these tweets are no fun:
@stauffermatt Think you have a permissions issue on your site, bud. pic.twitter.com/lUAS9hWrru
— Craig Thompson (@Migweld) January 18, 2017
@stauffermatt Something has broken mate. Just as i was researching! pic.twitter.com/G02r0Kjlsv
— Mads Jürgensen (@InctorMads) January 18, 2017
@stauffermatt heads up! your blog fails to write cache file. Permissions or full disk issue, maybe?
— Damiano Venturin (@damko) January 18, 2017
These and a dozen more. Ouch. But I'm not out of disk space. What's going on?
Turns out, there's something most folks never run into: your server doesn't just have limited space; it also has a limited number of "inodes", which are essentially the objects that represent a file or a directory. Most people never run into this limit, because it's an absurdly high number. But there's something fun about the Linux headers I mentioned before: while they're tiny, there are thousands and thousands and thousands of them.
Here's how you know this is your problem: you're constantly seeing errors on your server that the server is out of disk space and can't do simple things like tab-autocomplete your typing, but when you check, you have plenty of space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 30G 3.4G 25G 12% /
What's your next step to verify this is really your problem? Check your inodes:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda1 1966080 1966080 0 100% /
If you see 100% IUse%
(or close to it), then this is indeed your problem: You have too many files and folders (inodes) on your machine.
There are two ways to fix this. First, you could manually look through your whole server to try to find the offending directories and figure out where they're coming from, and then manage them. Here's a great article describing how to do that, and you should try this option if the second option doesn't work.
Second, you could take my word that if you're on my blog, there's a really good chance you're hitting this error because apt-get
doesn't auto-remove old, unused packages, and this issue is likely happening because of extra, unused Linux header packages.
If your system lets you run this command, you're good to go:
sudo apt-get autoremove -y
This will tell apt-get
to remove anything it's installed that's currently not in use. That means all those old Linux headers—and plenty of other no-longer-needed dependencies.
However, you might not be able to run this command—because apt-get
needs to be able to write to your filesystem in order to do its work. If you get an error about "not enough drive space" here, don't fret. It's still pretty likely your issue is with those Linux headers, so let's go find them.
$ uname -r
3.13.0-43-generic
The output of the uname -r
command shows which version of the Linux kernel you're currently running. Remember this, because you don't want to delete this one.
List out the files in /usr/src
, and find a good chunk of headers which aren't yours:
$ cd /usr/src
$ ls -al
drwxr-xr-x 24 root root 4096 Jan 11 06:32 linux-headers-3.13.0-107
drwxr-xr-x 7 root root 4096 Jan 11 06:32 linux-headers-3.13.0-107-generic
...
drwxr-xr-x 24 root root 4096 Jan 11 06:32 linux-headers-3.13.0-43
drwxr-xr-x 7 root root 4096 Jan 11 06:32 linux-headers-3.13.0-43-generic
All we need to do right now is delete a good chunk of them so that we can let apt-get
handle the rest. Here's what I ran (be cautious; this is running sudo rm -rf
on system files. Screw this up and you tank your server.)
I noticed that I have a bunch of headers that begin with linux-headers-3.13.0-9
, so I'll delete all of them:
$ cd /usr/src
$ sudo rm -rf linux-headers-3.13.0-9*
Good. We just dumped thousands and thousands of files, and we can now rely on sudo apt-get autoremove -y
to clean up the rest of the system for us. Boom.
The simplest answer is "just run sudo apt-get autoremove -y
every once in a while".
You can try to automate it, but because it requires sudo
access, it's going to be tough and possibly dangerous. Here's one guy who tried.
I've run into this a few times over the years, and I definitely need to thank Chris Fidao (for personal help) and Ivan Kuznetsov (for his blog) for getting me through it.
If you join a new open source project, it's very likely that you won't get direct access to push commits or branches up to the repository itself. So, instead, you'll fork the repo, make the changes on your version of the repo, and then "pull request" your changes back to the original.
Here are the steps to take.
Let's use the general-congress-hotline
project as an example. First, visit its page on GitHub, and click the "Fork" icon in the upper right of the page.
This will create a fork of the project under your user account.
Next, clone your local version down to your local machine.
git clone git@github.com:mattstauffer/general-congress-hotline.git
You now have a local representation of your fork.
In order to make it easy to keep your fork in sync with the original, add the original as a remote:
git remote add upstream https://github.com/StayWokeOrg/general-congress-hotline.git
If you check your remotes (git remote -v
), you can now see that you have two "remotes" that your local repo is pointed towards: origin
, which points to your repo, and upstream
, which points to the original. We'll get to why in a bit.
Since you want to branch from whatever the project's default branch is (this is often master
, but in the case of general-congress-hotline
it's development
), make sure you're on the default branch and it's up-to-date with the source repo. If you just forked it, it always will be—but if there have been a lot of changes to the original repo since you forked it, yours might be out of sync. Here's how to get yours in sync on a project where the default branch is development
:
git checkout development
git fetch upstream
git merge upstream/development
git push origin development
Now you can spin up your new branch:
git checkout -b my-feature-name
Make your changes, commit them, and push up to your forked repo for that branch:
touch new-file.txt
git add new-file.txt
git commit -m "Added new-file.txt"
git push origin my-feature-name
Now, you can create a pull request in the GitHub user interface. Visit your repo on GitHub and click the "New Pull Request" button, and you can create your PR from there.
Make sure to explain the purpose, context, and anything else necessary for reviewers to understand the PR. See GitHub's "How to write the perfect pull request".
That's it! The pull request will show up to the maintainers of the original repo and they can guide you from there.
]]>Thankfully, very little has changed on a user-facing front with regard to how queues work in Laravel 5.3.
The biggest change is that the command you would've once used to "listen" for queue jobs:
php artisan queue:listen
... is no longer the default. Instead, running queue:work
as a daemon is now the default:
php artisan queue:work
This was possible in the past by running php artisan queue:work --daemon
, but now, you don't have to pass --daemon
(instead, pass --once
if you want to only work on a single job), and Laravel is recommending you use queue:work
(daemon style) instead of queue:listen
as your default.
php artisan queue:listen
listens to your queue and spins up the entire application every time it operates on a queue job. This is slower, but doesn't require rebooting the worker every time you push new code.
php artisan queue:work
keeps the application running in between jobs, which makes it faster and lighter, but you'll need to restart the listener every time you push new code. The best way to do this is to run php artisan queue:restart
on every deploy.
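That restart step might look something like this in a deploy script (a sketch only—the script name, branch, and flags are hypothetical and should match your own deploy process):

```shell
# deploy.sh — hypothetical deploy script
git pull origin master
composer install --no-dev --prefer-dist

# Tell any running daemon workers to finish their current job,
# then restart so they pick up the newly deployed code:
php artisan queue:restart
```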
It's now recommended that you run a Supervisor process on your Linux hosts to watch your queue listener and restart it if it gets stopped. The docs now have a writeup on how to set up Supervisor correctly.
Essentially, you're going to install it using apt-get
, configure it using the /etc/supervisor/conf.d
file, and define that the queue worker should be restarted if it fails. You can even define how many queue workers you'd like to run at a given time.
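A minimal Supervisor config for a queue worker might look roughly like this (the file name, app path, and worker count are assumptions—adjust them for your server):

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (hypothetical path)
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=4
```

With `autorestart=true`, Supervisor brings the worker back up if it dies, and `numprocs` controls how many workers run at once.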
The final change is largely transparent to us as developers: the new queue infrastructure has a different model of how the primary worker handles control of each job. It's complicated, but it gives us the benefit of the worker having a lot more control over the behavior of long-running or misbehaving queue jobs. The new system also takes advantage of PHP 7.1's pcntl_async_signals when it's available.
As a reminder, you can control these long-running jobs using --timeout
and retry_after
; you can define that a queue worker process will kill a child process if it takes longer than a given amount of time using --timeout
:
php artisan queue:work --timeout=90
Note that you can use this timeout
in combination with retry_after
, which is a setting in your queue configuration file. retry_after
defines how long the worker should wait before assuming that a job has failed and needs to be re-added to the queue for a second try. As the docs note, make sure that your retry_after
is at least a few seconds longer than your timeout
so you don't get an overlap spinning up multiple copies of the same job.
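Concretely, that pairing might look like this (the values below are illustrative only):

```php
// config/queue.php — illustrative values
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'queue' => 'default',
        // Wait longer than the worker's --timeout before assuming
        // the job failed and releasing it back onto the queue:
        'retry_after' => 120,
    ],
],
```

...paired with a worker started as `php artisan queue:work --timeout=90`, so a stuck job is killed well before the queue decides it failed and retries it.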
That's it for now! It's pretty simple and light stuff, but I think it makes the entire setup a little bit cleaner and more predictable. Good stuff.
]]>Let's explore 5.3's JavaScript stack together. Spin up a sample app using the Laravel installer (or, if you're like me, use Lambo) and open up the site in your favorite IDE.
package.json
First, take a look at our package.json
:
{
"private": true,
"scripts": {
"prod": "gulp --production",
"dev": "gulp watch"
},
"devDependencies": {
"bootstrap-sass": "^3.3.7",
"gulp": "^3.9.1",
"jquery": "^3.1.0",
"laravel-elixir": "^6.0.0-14",
"laravel-elixir-vue-2": "^0.2.0",
"laravel-elixir-webpack-official": "^1.0.2",
"lodash": "^4.16.2",
"vue": "^2.0.1",
"vue-resource": "^1.0.3"
}
}
We're now pulling in Vue 2 and Vue Resource (which, by the way, is being retired soon and I believe will soon be replaced), and we still have jQuery and Sass and Lodash.
Now let's take a look at our Gulp (Elixir) file:
const elixir = require('laravel-elixir');
require('laravel-elixir-vue-2');
/*
|--------------------------------------------------------------------------
| Elixir Asset Management
|--------------------------------------------------------------------------
|
| Elixir provides a clean, fluent API for defining some basic Gulp tasks
| for your Laravel application. By default, we are compiling the Sass
| file for your application as well as publishing vendor resources.
|
*/
elixir((mix) => {
mix.sass('app.scss')
.webpack('app.js');
});
Nothing too different here, other than that we're pulling in Vue at the top and we're using Webpack to bundle our scripts instead of Browserify.
Note: Taylor & Jeffrey just announced this week that the next version of Elixir will be based entirely on Webpack, not Gulp, and it'll be named Mix!
The app.js file
So where do we go from here? Let's take a look at app.js (which is in resources/assets/js) to see what it's doing.
/**
* First we will load all of this project's JavaScript dependencies which
* include Vue and Vue Resource. This gives a great starting point for
* building robust, powerful web applications using Vue and Laravel.
*/
require('./bootstrap');
/**
* Next, we will create a fresh Vue application instance and attach it to
* the page. Then, you may begin adding components to this application
* or customize the JavaScript scaffolding to fit your unique needs.
*/
Vue.component('example', require('./components/Example.vue'));
const app = new Vue({
el: '#app'
});
OK, so it looks like Laravel ships with a bootstrap.js
file out of the box—we'll check that out in a second. Then we're pulling in an example Vue component, which we'll also take a look at. And it's binding our component to an element on our page with the ID of app
.
Before we even look further, we can now assume that, if we compile this file and include it on our page, something like this would probably do something:
<html>
<head></head>
<body>
<div id="app">
<example></example>
</div>
<script src="/js/app.js"></script>
</body>
</html>
The bootstrap.js file
Let's figure out what it is actually going to do. First, we'll open up that bootstrap file, which is resources/assets/js/bootstrap.js:
window._ = require('lodash');
/**
* We'll load jQuery and the Bootstrap jQuery plugin which provides support
* for JavaScript based Bootstrap features such as modals and tabs. This
* code may be modified to fit the specific needs of your application.
*/
window.$ = window.jQuery = require('jquery');
require('bootstrap-sass');
/**
* Vue is a modern JavaScript library for building interactive web interfaces
* using reactive data binding and reusable components. Vue's API is clean
* and simple, leaving you to focus on building your next great project.
*/
window.Vue = require('vue');
require('vue-resource');
/**
* We'll register a HTTP interceptor to attach the "CSRF" header to each of
* the outgoing requests issued by this application. The CSRF middleware
* included with Laravel will automatically verify the header's value.
*/
Vue.http.interceptors.push((request, next) => {
request.headers.set('X-CSRF-TOKEN', Laravel.csrfToken);
next();
});
/**
* Echo exposes an expressive API for subscribing to channels and listening
* for events that are broadcast by Laravel. Echo and event broadcasting
* allows your team to easily build robust real-time web applications.
*/
// import Echo from "laravel-echo"
// window.Echo = new Echo({
// broadcaster: 'pusher',
// key: 'your-pusher-key'
// });
Alright, there's a lot more going on now! We've now pulled in jQuery, Bootstrap, Vue, and Vue-Resource. We're adding the CSRF token to the headers for Vue and Vue-Resource. And there's a placeholder to make it easy to start using Echo if we want.
Let's take a look at this Example component in resources/assets/js/components/Example.vue
:
<template>
<div class="container">
<div class="row">
<div class="col-md-8 col-md-offset-2">
<div class="panel panel-default">
<div class="panel-heading">Example Component</div>
<div class="panel-body">
I'm an example component!
</div>
</div>
</div>
</div>
</div>
</template>
<script>
export default {
mounted() {
console.log('Component mounted.')
}
}
</script>
This is a Vueify-style Vue component that we can use as a sample to make our own components. You'll know it's working if you see the "Example Component" content on your screen.
So! Where do we go from here? Let's install our dependencies and run Elixir and then check the page out. We'll use Yarn (but if you don't have it, just run npm install
instead):
yarn
gulp
gulp watch
How much work will we have to do to see if this actually works? Let's see what the 5.3 Blade templates look like. The default welcome.blade.php
file doesn't reference these files at all, but the auth scaffolded files do, so run php artisan make:auth
to publish them.
Now, we can take a look at our default resources/views/layouts/app.blade.php
file:
<html>
... (header stuff)
<script>
window.Laravel = <?php echo json_encode([
'csrfToken' => csrf_token(),
]); ?>
</script>
</head>
<body>
<div id="app">
... (lots of content)
</div>
<!-- Scripts -->
<script src="/js/app.js"></script>
</body>
</html>
A few things of note here. First, the auth scaffolded files are pulling in /js/app.js
, so they'll all have access to our Vue instance and all the dependencies we bound. Second, you can see that there's a base div with an ID of app
, so that means we can use our Vue components anywhere within any of our templates and they'll be registered. And finally, there's a parent window.Laravel
JavaScript object where you can set any useful information; with this sample, you could pull the CSRF token in any JavaScript now by simply referencing Laravel.csrfToken
.
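For instance, you could use that object to attach the CSRF token to your own AJAX requests. Here's a quick sketch—in the browser, `window.Laravel` is set by the Blade layout, so it's simulated here to keep the example self-contained, and `csrfHeaders` is just a helper name I made up:

```javascript
// Simulating what the Blade layout sets on window.Laravel:
const Laravel = { csrfToken: 'abc123' };

// Build headers for a same-origin AJAX request, merging in any extras:
function csrfHeaders(extra = {}) {
    return Object.assign({ 'X-CSRF-TOKEN': Laravel.csrfToken }, extra);
}

const headers = csrfHeaders({ 'Content-Type': 'application/json' });
console.log(headers['X-CSRF-TOKEN']); // abc123
```

You could pass the result straight into jQuery's `$.ajax({ headers: ... })` or a similar HTTP client.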
So. We've run Elixir, looked through all of our JavaScript files, and taken a look at the Blade templates that will be referencing them. Let's go see how it works!
Since you're using Valet and likely spun this up with Lambo (right?) you can now visit these routes directly in your browser. I started this project with lambo blogpost
, so I can now visit http://blogpost.dev/login
to see what the Auth scaffold looks like.
Everything looks like it's working fine, I guess, so it's time for us to actually test that our Vue components are working correctly. Open up resources/views/auth/login.blade.php
and add <example></example>
anywhere within the content
section.
Save, and refresh the page.
There you go! You now have a fully functioning Vue stack with Bootstrap and jQuery and a sample, functioning, Vueify-style Vue component. Boom. Ready to go with almost no work.
]]>You don't know how many employees your users will add, so you add a little JavaScript button that makes it possible to add more.
How do you name your fields?
You probably know this is the wrong way:
Person 1:
<label>First name</label>
<input name="first_name1">
<label>Last name</label>
<input name="last_name1">
<label>Email</label>
<input name="email1">
To pull those out in the backend will require string manipulation—and imagine parsing out the number from that string field name when some numbers have one digit, but then all of a sudden you have 10 employees and now some of the fields have two digits at the end instead. Fail. Don't do it.
Here's the more common suggestion: use the field name array syntax:
Person 1:
<label>First name</label>
<input name="first_name[]">
<label>Last name</label>
<input name="last_name[]">
<label>Email</label>
<input name="email[]">
Person 2:
<label>First name</label>
<input name="first_name[]">
<label>Last name</label>
<input name="last_name[]">
<label>Email</label>
<input name="email[]">
This seems like a great idea, and it is—but when you parse the input on the other end, you're probably expecting something like this:
person1 = ['Jim', 'Barber', 'jim@barber.com'];
person2 = ['Amira', 'Sayegh', 'amira@sayegh.com'];
Sadly, that's not what you get. Instead, you get this:
first_name = ['Jim', 'Amira'];
last_name = ['Barber', 'Sayegh'];
email = ['jim@barber.com', 'amira@sayegh.com'];
Parsing that together is not awful, but it can get really clumsy—especially as you add more fields.
Fear not! There is a better solution!
If you set your fields to be grouped as "children" of a parent field, and give each "child" a numeric index, they'll all be returned grouped the way you're expecting. So people
is our "parent field", people[1]
is our first "child", and people[1][first_name]
is that child's first property.
Person 1:
<label>First name</label>
<input name="people[1][first_name]">
<label>Last name</label>
<input name="people[1][last_name]">
<label>Email</label>
<input name="people[1][email]">
Person 2:
<label>First name</label>
<input name="people[2][first_name]">
<label>Last name</label>
<input name="people[2][last_name]">
<label>Email</label>
<input name="people[2][email]">
And take a look at what we get now:
people = [
[
'first_name' => 'Jim',
'last_name' => 'Barber',
'email' => 'jim@barber.com'
],
[
'first_name' => 'Amira',
'last_name' => 'Sayegh',
'email' => 'amira@sayegh.com'
]
]
Boom, baby.
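If you want to see this grouping in action without submitting a form, PHP's parse_str() applies the same rules to a query string that PHP applies to submitted form input (a sketch; the variable name is mine):

```php
<?php

// parse_str() groups bracketed field names into nested arrays,
// exactly the way PHP builds $_POST from a form submission.
parse_str(
    'people[1][first_name]=Jim&people[1][last_name]=Barber'
    . '&people[2][first_name]=Amira&people[2][last_name]=Sayegh',
    $result
);

// $result['people'][1]['first_name'] === 'Jim'
// $result['people'][2]['last_name'] === 'Sayegh'
```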
Here's a quick bit of ES6 JavaScript to show one way you might want to do this:
<form method="post">
<div id="people-container">
<h3>Person 1:</h3>
<p>
<label>First name</label><br>
<input name="people[1][first_name]">
</p>
<p>
<label>Last name</label><br>
<input name="people[1][last_name]">
</p>
<p>
<label>Email</label><br>
<input name="people[1][email]">
</p>
<h3>Person 2:</h3>
<p>
<label>First name</label><br>
<input name="people[2][first_name]">
</p>
<p>
<label>Last name</label><br>
<input name="people[2][last_name]">
</p>
<p>
<label>Email</label><br>
<input name="people[2][email]">
</p>
</div>
<a href="javascript:;" id="add-new-person">Add new person</a>
<p>
<input type="submit">
</p>
</form>
<script>
let i = 3;
document.getElementById('add-new-person').onclick = function () {
let template = `
<h3>Person ${i}:</h3>
<p>
<label>First name</label><br>
<input name="people[${i}][first_name]">
</p>
<p>
<label>Last name</label><br>
<input name="people[${i}][last_name]">
</p>
<p>
<label>Email</label><br>
<input name="people[${i}][email]">
</p>`;
let container = document.getElementById('people-container');
let div = document.createElement('div');
div.innerHTML = template;
container.appendChild(div);
i++;
}
</script>
On CodePen:
See the Pen HTML form submission with multiple sub items by Matt Stauffer (@mattstauffer) on CodePen.
I remembered that I wanted to write this article as I was listening to a great Full-Stack Radio episode with Jonathan Reinink where they talk about forms for an hour. It's good stuff. Take a listen.
Also, Adam has written a little about this same problem on his blog—but he chose to solve it on the server side instead. Take a look: Cleaning up form input with transpose
]]>We had a little mixup where Amazon ran out of stock right on launch day, but it's now looking great:
As always, Amazon has the cheapest prices (at least in the U.S.) and can sell the print and eBook versions of the book.
O'Reilly also has a page up for it, where you can buy the print book or any variety of eBook formats:
Folks who pre-ordered are just now starting to get their print copies, starting in the U.K. and moving around the world, so I'm looking forward to finally hearing what everyone thinks.
I got my copy last week, and here are a few photos:
Yes, that's what 454 pages looks like!
Here are a few early reviews:
***** Great content for any level of Laravel developer, new or experienced!
This book has so many tidbits in it that helped expand my knowledge of Laravel. A huge amount of effort was put into making the content clear, understandable, and digestible. I recommend this for anyone getting started with Laravel or even as a reference for experienced users.
- Vince Mitchell
***** The best laravel book so far
I tried several books regarding laravel 5.3 in the last few weeks. And this is by far the best one if you are completely new to laravel like me.
The examples are short yet concise, easy to understand and the concepts behind them are explained very well. You learn a whole lot without getting overwhelmed with huge code examples.
While it's always best to try out all the examples in a programming book, you can read this one also without having a computer nearby and still get your head around the concepts in laravel.
I am glad I found this book!
- ronfrtz
***** Even useful for veteran Laravel devs
As a 4 years experienced developer with Laravel, I must say it was surprisingly helpful to learn some tips and tricks since we as veteran engineers think we know it all.
This shows how much effort Matt puts into learning the underlying codebase and thus sometimes giving more detailed and easy to learn information that isn't even in the Documentation (at least at the time of the Pre Release).
Matt is a natural teacher and has a great learning methodology.
- Andre Sardo
I'm overjoyed that this product of over a year of work is finally in people's hands. If you have it, please go rate and review it on O'Reilly or Amazon! And if you don't have it, there's no better time than now to go buy a copy.
And now... back to your regularly scheduled blog. Finally!
]]>In Laravel 5.3, we have another new feature for communicating with our users: Notifications.
Think about any message that you want to send to your users where you may not care about how they receive the message. A password reset notification, maybe, or a "you have new changes to review" notification, or "Someone added you as a friend." None of these are specifically better as emails; they may be just fine as SMS messages, Slack notifications, in-app popups, or myriad other means of notifications.
Laravel 5.3's Notification system makes it easy to set up a single class for each notification (e.g. "WorkoutAssigned") which describes how to notify users of the same message using many different communication mediums, and also how to choose which medium to use for each user.
As always, we'll use an Artisan command to create our notifications:
php artisan make:notification WorkoutAssigned
It'll create our file at app/Notifications/WorkoutAssigned.php
, and it will look like this:
<?php
namespace App\Notifications;
use Illuminate\Bus\Queueable;
use Illuminate\Notifications\Notification;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Messages\MailMessage;
class WorkoutAssigned extends Notification
{
use Queueable;
/**
* Create a new notification instance.
*
* @return void
*/
public function __construct()
{
//
}
/**
* Get the notification's delivery channels.
*
* @param mixed $notifiable
* @return array
*/
public function via($notifiable)
{
return ['mail'];
}
/**
* Get the mail representation of the notification.
*
* @param mixed $notifiable
* @return \Illuminate\Notifications\Messages\MailMessage
*/
public function toMail($notifiable)
{
return (new MailMessage)
->line('The introduction to the notification.')
->action('Notification Action', 'https://laravel.com')
->line('Thank you for using our application!');
}
/**
* Get the array representation of the notification.
*
* @param mixed $notifiable
* @return array
*/
public function toArray($notifiable)
{
return [
//
];
}
}
Let's take a look at what we have here. First, the constructor, where we'll inject any relevant data.
public function __construct() {}
Next, we have the via()
method, which defines which channels each individual notification instance will be sent through. Return an array of channel names here, and the notification will be delivered over every one of them.
public function via($notifiable)
{
return ['mail'];
}
For now, we'll just keep this hard-coded, but since this is a method, you can programmatically define which channel names to use—for example, allowing each user to define their notification preferences.
Out of the box, your notification shows how to customize a specific notification channel with the toMail()
method. It's passed the "notifiable", which we'll cover in a second, and you build a mail message and return it.
public function toMail($notifiable)
{
return (new MailMessage)
->line('The introduction to the notification.')
->action('Notification Action', 'https://laravel.com')
->line('Thank you for using our application!');
}
Finally, the toArray()
method is the basic fallback that will be referenced by any channel that you don't specifically customize—for example, the database channel.
public function toArray($notifiable)
{
return [];
}
Let's tweak this class to make a bit more sense for our "Workout Assigned" notification:
...
class WorkoutAssigned extends Notification
{
use Queueable;
private $workout;
public function __construct($workout)
{
$this->workout = $workout;
}
public function via($notifiable)
{
return ['mail'];
}
public function toMail($notifiable)
{
return (new MailMessage)
->line("You've been assigned a new workout!")
->action('View workout', route('workouts.show', [$this->workout]))
->line("Let's get going!");
}
public function toArray($notifiable)
{
return [
'workout' => $this->workout->id
];
}
}
So we're expecting the instance to be constructed with a workout, so we can correctly notify our notifiable(s) of which workout was assigned to them.
Up until now I've been talking about notifying users. But technically any Eloquent model could be notifiable; it should just import the Illuminate\Notifications\Notifiable
trait. You may find yourself wanting to notify a Group, a Team, a List, or any other reasonable model that you might want to send notifications to.
Just note that certain notification channels expect certain information available on the notifiable. For example, the mail channel expects the model to have an "email" property so it knows which email address to send to. You can customize how to route a given model for a given channel by adding a method like this to your model:
...
class Group
{
use Notifiable;
public function routeNotificationForMail()
{
return $this->owner->email;
}
}
The structure is routeNotificationFor{CHANNELNAME}
, and you need to return, in this case, the email address to send to. Other notification channels will expect different things returned for their route methods.
There are two ways to send a notification. First, you can use the Notification façade:
Notification::send(User::first(), new WorkoutAssigned($workout));
The first parameter is who should be notified. You can either pass a single model instance, or you can pass a whole collection:
Notification::send(User::all(), new DowntimePlanned($date));
The second parameter is an instance of your notification.
Alternatively, you can use the notify()
method on your model that imports the Notifiable
trait (which the default User
class already does out of the box):
$user->notify(new WorkoutAssigned($workout));
Note: Before you send your first notification, go edit the new property named "name" in your
config/app.php
file; this will determine the name of your app which will be displayed in the header and footer of your emails.
Here's what the mail notification looks like by default with our Workout Assigned class above:
So, what channels are available other than mail
? Out of the box you'll get database
, broadcast
, nexmo
, and slack
, but you can look for more at the community-driven Laravel Notification Channels site.
Remember how I mentioned programmatically defining which notification channel to use for a user? Here's one way you might want to do it, from the docs:
public function via($notifiable)
{
return $notifiable->prefers_sms ? ['nexmo'] : ['mail', 'database'];
}
You could also build that logic into the user model itself:
// in notification
public function via($notifiable)
{
return $notifiable->preferredNotificationChannel();
}
// in the User class
public function preferredNotificationChannel()
{
return PresenceChecker::isOnline($this) ? ['broadcast'] : ['mail'];
}
We've already taken a look at the basics of how to send a mail notification, but there's a lot more customizing you can do.
You can customize the subject of your email (which is, by default, parsed from the name of your notification class—e.g. "WorkoutAssigned" would have a subject of "Workout Assigned") using the subject()
method:
public function toMail($notifiable)
{
return (new MailMessage)
->subject('You have been assigned a new workout!')
...
}
You can customize the greeting (which defaults to "Hello!") using the greeting()
method:
public function toMail($notifiable)
{
return (new MailMessage)
->greeting("Let's goooooooo!")
...
}
You can use the "error" template, which changes everything blue to red:
public function toMail($notifiable)
{
return (new MailMessage)
->error()
...
}
And finally, you can publish and customize the template used for the email:
php artisan vendor:publish --tag=laravel-notifications
The HTML & plain text templates will now be available in resources/views/vendor/notifications
.
The database notification channel stores each notification in a database table, expecting you to handle them in your application however you wish.
You can create a migration for this table by running php artisan notifications:table
.
If you don't specify a toDatabase()
method on your Notification, Laravel will use the toArray()
method to define the data to store for your notification. But you can also customize it; whatever you return from the toDatabase()
or toArray()
methods will be JSON-encoded in the data
database column.
// in your notification
public function toDatabase($notifiable)
{
return [
'trainee_id' => $notifiable->id,
'workout_id' => $this->workout->id
];
}
You can easily get these notifications via the notifications()
relationship that's added to your model with the Notifiable
trait. This includes some conveniences around "read" vs. "unread" notifications; each notification has a markAsRead()
method that you can use to update its read_at
property, and you can scope only to "unread" notifications using the unreadNotifications()
method on the model:
foreach ($user->notifications as $notification) {
// do stuff
$notification->markAsRead();
}
// later...
foreach ($user->unreadNotifications as $notification) {
// new!
}
If you're not yet familiar with Laravel's Event Broadcasting, you'll want to be in order to understand the broadcast channel; check out my blog post introducing Laravel's Event Broadcasting.
The broadcast notification channel will broadcast events with your notification's data payload to your Websocket client. It'll use the {notifiableClassNameDotNotated}.{id}
private channel for these notifications; this means notifying user 15 would broadcast to the private channel named App.User.15
.
Just like the other methods, broadcast notifications will default to getting their data structure from toArray()
unless you specifically define a toBroadcast()
method.
If you're using Laravel Echo, you can subscribe to a user's broadcast channel with code that might look a bit like this:
var userId = 15; // set elsewhere
Echo.private('App.User.' + userId)
.notification((notification) => {
console.log(notification);
});
The Nexmo channel makes it easy to send SMS notifications to your users. You'll need to set up your Nexmo credentials in config/services.php
under the nexmo
key, looking something like this:
'nexmo' => [
'key' => env('NEXMO_KEY'),
'secret' => env('NEXMO_SECRET'),
'sms_from' => '15558675309',
],
You'll need to define a toNexmo()
method that returns an instance of Illuminate\Notifications\Messages\NexmoMessage
:
public function toNexmo($notifiable)
{
return (new NexmoMessage)
->content('Hey this is on your phone OMG');
}
Just like the email channel looks for an email
property on the notifiable, the Nexmo channel looks for a phone_number
property to send the message to. You can customize this with the routeNotificationForNexmo()
method:
// in the notifiable model
public function routeNotificationForNexmo()
{
return $this->sms_number;
}
The Slack notification channel broadcasts your notifications to a Slack channel.
Note: In order to use Slack notifications, you'll need to bring Guzzle in via Composer:
composer require guzzlehttp/guzzle
First, go to your Slack account, "Apps and Integrations" section (https://{yourteam}.slack.com/apps). Choose the "Incoming Webhook" type and add a new configuration. You can specify which channel you want it to post to and more.
Grab the Webhook URL and head back to your Laravel app.
Your notifiable should implement a routeNotificationForSlack()
method that returns this webhook URL:
public function routeNotificationForSlack()
{
return $this->slack_webhook_url;
}
Now let's take a look at customizing the notification. You can read more in the docs, but here's a quick sample from the docs of what you can do with your toSlack()
method:
public function toSlack($notifiable)
{
$url = url('/invoices/' . $this->invoice->id);
return (new SlackMessage)
->success()
->content('One of your invoices has been paid!')
->attachment(function ($attachment) use ($url) {
$attachment->title('Invoice 1322', $url)
->fields([
'Title' => 'Server Expenses',
'Amount' => '$1,234',
'Via' => 'American Express',
'Was Overdue' => ':-1:',
]);
});
}
You can also keep it super simple; just generate a SlackMessage
and define at least the content:
public function toSlack($notifiable)
{
return (new SlackMessage)
->content('One of your invoices has been paid!');
}
Any notifications that implement the ShouldQueue
interface and import the Queueable
trait will be pushed onto your queue instead of sent synchronously. Since most of the notification channels require sending HTTP requests, queueing notifications is pretty highly recommended.
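For our hypothetical WorkoutAssigned notification, that's just a matter of adding the interface—the generated class already imports the Queueable trait:

```php
<?php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Notification;

// Implementing ShouldQueue is all it takes; Laravel will now push
// this notification onto the queue instead of sending it inline.
class WorkoutAssigned extends Notification implements ShouldQueue
{
    use Queueable;

    // ...constructor, via(), toMail(), etc. as before
}
```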
That's it!
This is great. Notifications are so simple and robust, you may no longer find yourself needing to use any other notification tool (mail, Slack SDK directly, etc.)—especially when you see how many custom notification channels the community has created. It's bonkers.
As always, with great power comes great responsibility; make sure you're being careful with your users' time and attention and you don't go overboard with the notifications.
So, go forth. Notify.
]]>It's been a tumultuous summer, what with Stauffer Child #2 arriving and me finally finishing Laravel: Up and Running (WOOP!).
Here’s li’l miss, cheesing a few hours after she was born. :) pic.twitter.com/AguGOgipys
— Matt Stauffer (@stauffermatt) July 31, 2016
A lot of folks have been asking me about timelines for the book, so let me share what we have here.
First, the most important thing: I finished writing the book a few months back, which was a great moment! I tweeted about it but can't find it anywhere. But it happened, I promise.
I promptly printed the entire book and spent the summer editing it by hand with a Sharpie:
Editing. Will be done with these edits by the time @laravelphp 5.3 releases; then I write up final features & print! pic.twitter.com/dauQqatou9
— Matt Stauffer (@stauffermatt) August 17, 2016
The big timeline constraint was Laravel 5.3. Over a year ago, when we set out to publish the book, we decided we'd release it immediately after Laravel 5.3 was announced. We expected that announcement in April; it came in August. So, that's why our original projections were off. Apologies!
Once Taylor announced Laravel 5.3, I quickly wrote up the new features in the book and submitted my final manuscript on Monday of this week. It's with the copyeditors right now. Yesterday I got a tentative publishing schedule from O'Reilly, which says **the final Ebook will be available end of October and the print edition will be available beginning of November**.
I get it. I want it now too! So here are a few things to help you in the waiting process.
First, anyone who pre-orders the Ebook from O'Reilly will get early access to the Ebook as it is today. Right now that's the first 12 chapters, but I've asked my editor to update that with the final (pre-copyediting) version of the book. Once she has that released, which I hope will be any day now, anyone who preorders between now and when the book is released will have access to the entire book, just without the last round of copy edits.
So despite "October/November" as the release date, you can get the book almost exactly as it will release by ordering the Ebook today. If you plan to eventually buy the Ebook, don't wait for the release; this pre-release version that's coming out any day now is 99.9% the book that will finally be released.
Freek Van der Herten was one of the folks who got an early access version of the book, and he wrote a post after reading it: "Things I learned from reading Laravel: Up and running". You can get some of my favorite tips from the book by taking a look at his post.
I also wrote a post about a few tips I learned when I was writing Laravel: Up and Running: "Things I didn't Know Laravel Could Do".
I've gotten a few questions often enough that I wanted to put together answers to them here.
That's all I have for you for today! I'll be releasing a few more segments of the book soon to show you even more of what you have to look forward to. I can't wait to hear what y'all think!
And finally, if you haven't already, you can pre-order at O'Reilly today or you can sign up to be notified when the book is released.
routes
directory and the changes it will make to the directory structure. But there's one other directory change coming in Laravel 5.3: many of the default folders will not show up until you run a command that requires them.
These directories came with every new install of Laravel prior to 5.3 but were often left unused, so new installs won't include them. Here's the list of folders that are going away:
Events
Jobs
Listeners
Policies
Notice a pattern? Each of these directories contains a single, more advanced class structure that isn't universal to Laravel apps.
One note: if you were used to manually creating classes for each of those structures, you may find this change adds an extra step; if so, it's worth considering using Artisan to create them instead. Artisan will create these directories if they don't exist, so in that context the change should be transparent.
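For instance, any of Laravel's standard generators will create the missing directory before writing the class (the class names here are just examples):

```bash
# Creates app/Events/ if needed, then app/Events/UserRegistered.php
php artisan make:event UserRegistered

# Likewise for the other removed directories
php artisan make:job SendReminderEmail
php artisan make:listener SendWelcomeEmail
php artisan make:policy PostPolicy
```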
For a great visual of how this change will simplify a default Laravel install, take a look at this graphic from Laravel News:
(image from Laravel News post Laravel 5.3 changes the 'app' folder)
There are also new folders that won't show up on a new install but might show up after you use Artisan generators: app/Mail
for Mailables and app/Notifications
for Notifications.
Mail::send('emails.reminder', ['user' => $user], function ($m) use ($user) {
$m->from('hello@app.com', 'Your Application');
$m->to($user->email, $user->name)->subject('Your Reminder!');
});
I'm not saying it's awful—it's still so much cleaner than its competitors—but it's often confusing to figure out what goes in the closure and what doesn't, what the parameter order is, etc.
Mailables are PHP classes in Laravel 5.3 that represent a single email: "NewUserWelcome", or "PaymentReceipt". Now, similar to event and job dispatching, there's a simple "send" syntax, to which you'll pass an instance of the class that represents what you're "dispatching"; in this context, it's an email.
So now, that email above looks like this:
Mail::to($user)->send(new Reminder);
Let's take a look at that Reminder
class. First, create it with an Artisan command:
php artisan make:mail Reminder
It'll now live in the app/Mail
directory. Let's take a look at how it looks out of the box:
<?php
namespace App\Mail;
use Illuminate\Bus\Queueable;
use Illuminate\Mail\Mailable;
use Illuminate\Queue\SerializesModels;
class Reminder extends Mailable
{
use Queueable, SerializesModels;
/**
* Create a new message instance.
*
* @return void
*/
public function __construct()
{
//
}
/**
* Build the message.
*
* @return $this
*/
public function build()
{
return $this->view('view.name');
}
}
All of the configuration you're used to doing in closures now takes place in the build()
method. So let's re-create that example email again:
public function build()
{
return $this->from('hello@app.com', 'Your Application')
->subject('Your Reminder!')
->view('emails.reminder');
}
Note: If you don't explicitly set the subject, Laravel will guess it from your class name. So if the class is named "ApplicationReminder", the default subject will be "Application Reminder".
Now, what if we want to pass some data in to the subject or into the view? That goes into the constructor:
Mail::to($user)->send(new Reminder($event));
class Reminder extends Mailable
{
public $event;
public function __construct($event)
{
$this->event = $event;
}
public function build()
{
return $this->from('hello@app.com', 'Your Application')
->subject('Event Reminder: ' . $this->event->name)
->view('emails.reminder');
}
}
Any public properties on our mailable class will be made available to the view, so we can now use $event
in the view:
// resources/views/emails/reminder.blade.php
<h1>{{ $event->name }} is coming up soon!</h1>
<p>Lorem ipsum.</p>
But what if you'd prefer specifying the data explicitly? You can do that—pass an array to a with()
call in build()
:
public function build()
{
return $this->from('hello@app.com', 'Your Application')
->subject('Event Reminder: ' . $this->event->name)
->view('emails.reminder')
->with(['title' => $this->event->name]);
}
As you can see, customizing the email itself happens in the build()
method and customizing who's getting it happens when we call the email. Let's take a look at cc
and bcc
:
Mail::to(User::find(1))
->cc(User::find(2))
->bcc(User::find(3))
->send(new Reminder);
// These methods also accept collections
Mail::to(User::all())
->send(new Reminder);
There's a new text()
method to go along with the new view()
method. You can pass it the view you want used for the plaintext version of this email:
public function build()
{
return $this->view('emails.reminder')
->text('emails.reminder_plain');
}
One of the problems with sending mail in line with your application's execution is that it can often take a few seconds to send. Queues are the perfect answer to this. They're already easy with Laravel's pre-existing mail syntax, and it stays easy here: just call queue()
instead of send()
.
Mail::to($user)->queue(new Reminder);
You can also use later
to specify when it should be sent:
$when = Carbon\Carbon::now()->addMinutes(15);
Mail::to($user)->later($when, new Reminder);
You'll probably get used to hearing this. Everything you can currently run within your mail closures, you can run within the build()
method. This includes attach()
. The first parameter is the path to the file, and the optional second parameter takes an array for customizing the details of the attached file.
public function build()
{
$this->view('emails.reminders')
->attach('/path/to/file', [
'as' => 'name.pdf',
'mime' => 'application/pdf',
]);
}
You can also use attachRaw
to attach raw data:
public function build()
{
$this->view('emails.reminders')
->attachRaw($this->pdf, 'name.pdf', [
'mime' => 'application/pdf',
]);
}
Mailables are not a drastic new feature. There's nothing you can do here that you couldn't already do with Laravel.
But it's one of those features you'll be glad for on a regular basis. I use mail a lot in my Laravel apps. I'm very grateful for this new system. It just makes sense.
The docs are online now if you want to read more.
We're talking many routes, dozens of migrations, complicated configuration, and much more—even with amazing packages trying to simplify the situation as much as possible.
Laravel Passport is a native OAuth 2 server for Laravel apps. Like Cashier and Scout, you'll bring it into your app with Composer. It uses the League OAuth2 Server package as a dependency but provides a simple, easy-to-learn and easy-to-implement syntax.
In Laravel 5.2, we got a new structure in our authentication system: multiple auth drivers. This means that, instead of there being a single auth system that is responsible for one app at a time, you can apply different auth systems to different routes (or in different environments). Out of the box, we got the same auth system we've always had and a new token-based auth system for APIs.
Laravel 5.2's token system was fine enough—but it wasn't really any more secure than normal password login. It was there, most importantly, to lay the groundwork for packages like Passport, which essentially adds a new "passport" driver you can use in your app to make certain routes OAuth2 authed.
Follow these steps on any Laravel 5.3 app and you'll be on your way to the easiest OAuth 2 server possible:
composer require laravel/passport
config/app.php
, and add Laravel\Passport\PassportServiceProvider
to your providers list.
php artisan migrate
php artisan passport:install
, which will create encryption keys (local files) and personal/password grant tokens (inserted into your database)
Go into your User
class and import the trait Laravel\Passport\HasApiTokens
Add the OAuth2 routes: go to AuthServiceProvider
and use Laravel\Passport\Passport
, then in the boot()
method run Passport::routes()
// AuthServiceProvider
public function boot()
{
$this->registerPolicies();
Passport::routes();
}
[Optional] Define at least one scope in the boot()
method of AuthServiceProvider
, after Passport::routes()
using Passport::tokensCan
// AuthServiceProvider
public function boot()
{
$this->registerPolicies();
Passport::routes();
Passport::tokensCan([
'conference' => 'Access your conference information'
]);
}
config/auth.php
, guards.api.driver
; change the api
guard to use passport
driver instead of token
// config/auth.php
return [
...
'guards' => [
...
'api' => [
'driver' => 'passport', // was previously 'token'
'provider' => 'users'
]
]
];
php artisan make:auth
php artisan vendor:publish --tag=passport-components
This is a lot, but basically we're importing that package, registering it with Laravel, setting our User
up to authenticate using it, adding a few routes for authentication and callbacks, and defining our first scope that users can request access through.
At this point, you're theoretically done. The server is installed and it works. That was fast! Your routes work, and you can create your clients and tokens either via Passport's Artisan commands or by building your own administrative tool on top of Passport's API. But before you make your decision, take a look at that API and the Vue components Passport provides out of the box.
Passport exposes a JSON API for your frontend to consume to let you manage your clients and tokens.
Out of the box, Passport comes with Vue components that show how you might want to interact with this API in your app. You could use these components and call it done, or you could write your own tool to interact with the API.
There are three components: passport-clients
, which shows all of the clients you've registered; passport-authorized-clients
, which shows all of the clients you've given access to your account; and passport-personal-access-tokens
, which shows all of the "personal" tokens you've created for testing the API. We can register them in app.js
:
Vue.component(
'passport-clients',
require('./components/passport/Clients.vue')
);
Vue.component(
'passport-authorized-clients',
require('./components/passport/AuthorizedClients.vue')
);
Vue.component(
'passport-personal-access-tokens',
require('./components/passport/PersonalAccessTokens.vue')
);
const app = new Vue({
el: 'body'
});
And then use them in our HTML:
<!-- let people make clients -->
<passport-clients></passport-clients>
<!-- list of clients people have authorized to access our account -->
<passport-authorized-clients></passport-authorized-clients>
<!-- make it simple to generate a token right in the UI to play with -->
<passport-personal-access-tokens></passport-personal-access-tokens>
Let's walk through how they work and what they do.
We'll follow the example Taylor set in his talk at Laracon: We'll have a Passport-enabled server app at passport.dev
and a consumer app at consumer.dev
.
Here's what the admin panel (using the three components as shown above) will look like on your Passport-enabled Laravel app:
Let's create a new client:
Once you create a client, the UI will return a "secret" and a "client ID". Go to your consuming client (another site or app; in this example, consumer.dev
) and put that key and ID into your configuration for your OAuth2 Client. Here's what it looked like when I created a client for "Consumer.dev":
Never worked with OAuth 2 before? In the particular type of authentication we're working with right now, the "authorization code" grant, the way that a client identifies itself is with a "client ID" (like a primary key—sometimes just
1
) and a "secret" (like a password or a token). Each "client" is something like a web site that connects to this web site's data, or a mobile client, or something else that relies on this app and needs to authenticate with it. Passport also enables "password" grant and "personal" grant.
To test our app, we're going to build a consumer app just like Taylor did in his keynote. Remember, with any OAuth 2 situation, we have at least two apps: first, our "server" app that is using Passport, which is the app that will be authenticating users, and second, the "consumer" app that will be requesting the authentication. Imagine that Twitter is your "server" and a Twitter client you're writing is your "consumer"; the Twitter client wants its user to be able to authenticate with Twitter so the client can display their tweets.
Here's the routes file for our consumer.dev
client app, based on Taylor's Laracon demo. Remember, this is the app that is CONSUMING the OAuth authentication services, not the one providing it.
// routes/web.php
use Illuminate\Http\Request;
// First route that user visits on consumer app
Route::get('/', function () {
// Build the query parameter string to pass auth information to our request
$query = http_build_query([
'client_id' => 3,
'redirect_uri' => 'http://consumer.dev/callback',
'response_type' => 'code',
'scope' => 'conference'
]);
// Redirect the user to the OAuth authorization page
return redirect('http://passport.dev/oauth/authorize?' . $query);
});
// Route that user is forwarded back to after approving on server
Route::get('callback', function (Request $request) {
$http = new GuzzleHttp\Client;
$response = $http->post('http://passport.dev/oauth/token', [
'form_params' => [
'grant_type' => 'authorization_code',
'client_id' => 3, // from admin panel above
'client_secret' => 'yxOJrP0L9gqbXxoxoFl5I22IytFOpeCnUXD3aE0d', // from admin panel above
'redirect_uri' => 'http://consumer.dev/callback',
'code' => $request->code // Get code from the callback
]
]);
// echo the access token; normally we would save this in the DB
return json_decode((string) $response->getBody(), true)['access_token'];
});
When you visit http://consumer.dev/
it builds an OAuth request URL using your client ID and scope and providing a post-auth callback URL, and then it redirects you over to the Passport site (passport.dev
) for you to accept or reject the auth request.
When you authorize, Passport will then redirect you back to your provided callback URL—in this case, http://consumer.dev/callback
—and you'll now have access to your token. As you can see in the example above, you can do whatever you want with it—in this case we'll just echo it out to grab and use in a test we'll cover in a minute.
Assuming you created a client for your consumer app and then got a token for your user, let's now test out using it.
First, set up a route in your Passport-enabled app that we can be sure requires that the user is authenticated. The simplest option will be to return the user, which is actually already set up in your routes/api.php
file by default:
// routes/api.php
Route::get('/user', function (Request $request) {
return $request->user();
})->middleware('auth:api');
Next, open up your favorite REST client (Postman, Paw, or manually write a query in PHP or CURL) and make a request to that route we set up: http://passport.dev/api/user
. So that you get a useful response, be sure to set your Content-Type
header to application/json
and your Accept
header to application/json
.
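If you'd rather script the request than use a GUI client, a rough Guzzle sketch (assuming Guzzle is installed via Composer; the URLs match the example apps above) might look like this:

```php
<?php

require 'vendor/autoload.php';

// Disable http_errors so a 401 response is returned rather than thrown
$client = new GuzzleHttp\Client(['http_errors' => false]);

$response = $client->get('http://passport.dev/api/user', [
    'headers' => [
        'Accept' => 'application/json',
        'Content-Type' => 'application/json',
        // add 'Authorization' => 'Bearer TOKENHERE' once you have a token
    ],
]);

echo $response->getStatusCode() . "\n" . $response->getBody();
```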
With no authentication, you'll receive a 401
response showing you're not authenticated:
{
"error": "Unauthenticated."
}
Now, remember how the callback
route in our consumer app echoes the access token? Copy that token, and add a new header to your request named "Authorization". Set the value equal to "Bearer TOKENHERE", where TOKENHERE is your access token you copied from the callback
route.
Now, you should see your actual user:
{
"id": 1,
"name": "Matt Stauffer",
"email": "matt@mattstauffer.co",
"created_at": "2016-09-08 10:45:00",
"updated_at": "2016-09-08 10:45:00"
}
That's it! You have a fully functional OAuth 2 auth API!
There are a few more features, though. Let's take a look.
Passport offers a helpful tool that's not built into other OAuth 2 packages: the ability for your users to easily create tokens for themselves to use to test out your app. Your power users (imagine one of your users who is a developer and wants to consider your API for building an app against) don't have to create an entire second consumer app and register it for use with the authorization code grant just to test your API; instead they can create "personal tokens" just for testing purposes on their own accounts.
To use personal tokens, create a "personal client" once (you don't have to do this if you've already run php artisan passport:install
):
php artisan passport:client --personal
Now you, and any of your users, can go to the Personal Access Tokens component and hit "Create New Token". At this point you're creating new tokens that have this single Personal Client listed as the client. You can delete these tokens just like you can revoke actual client tokens.
If you're unfamiliar with the idea of scopes, they're things you can define so that a consumer can define which type of access they're requesting to your app. This allows things like "user" access vs "full" access, etc. Each scope has a name and a description, and then within the app you can define their impact.
We've already covered how to define a scope above. Now let's see the simplest way to define their impact: Scope middleware.
There are two middleware that you can add to your app. You can give them any shortcut you want, but for now we'll call them "anyScope" and "allScopes".
Let's go to app/Http/Kernel.php
and add them to the $routeMiddleware
property:
// App\Http\Kernel
...
protected $routeMiddleware = [
...
// you can name these whatever you want
'anyScope' => \Laravel\Passport\Http\Middleware\CheckForAnyScope::class,
'allScopes' => \Laravel\Passport\Http\Middleware\CheckScopes::class,
];
Each of these middleware requires you to pass one or more scope names to it. If you pass one or more scopes to "anyScope", the user will have access to that route if they have granted access with any of the provided scopes. If you pass one or more scopes to "allScopes", the user will have access to that route only if they have granted access to all of the provided scopes.
So, for example, if you want to limit users' access to routes based on whether they have the conference
scope:
Route::get('/whatever', function () {
// do stuff
})->middleware('anyScope:conference');
// Any of the given scopes
Route::get('/whatever', function () {
// do stuff
})->middleware('anyScope:conference,otherScope');
// All of the given scopes
Route::get('/whatever', function () {
// do stuff
})->middleware('allScopes:conference,otherScope');
If you have a frontend that's consuming this API in the same app, you may not want to do the whole OAuth dance there. But you might still want the OAuth flow to be available for external API consumers.
Passport offers a trick for your frontend—which has your user already authenticated via Laravel and sessions—to access your API and get around the OAuth flow.
Go to app/Http/Kernel.php
and add this new middleware to web
:
Laravel\Passport\Http\Middleware\CreateFreshApiToken::class,
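In context, the web middleware group in app/Http/Kernel.php would look something like this (the other entries shown are Laravel 5.3's defaults, trimmed for brevity):

```php
// app/Http/Kernel.php
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Session\Middleware\StartSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        // ...other default web middleware...

        // Issues the cookie-held JWT token for logged-in users
        \Laravel\Passport\Http\Middleware\CreateFreshApiToken::class,
    ],
];
```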
This adds a JWT token as a cookie to anyone who's logged in using Laravel's traditional auth. Using the Synchronizer token pattern, Passport embeds a CSRF token into this cookie-held JWT token. Passport-auth'ed routes will first check for a traditional API token; if it doesn't exist, they'll secondarily check for one of these cookies. If this cookie exists, it will check for this embedded CSRF token to verify it.
So, in order to make all of your JavaScript requests authenticate to your Passport-powered API using this cookie, you'll need to add a request header to each AJAX request: set the header X-CSRF-TOKEN
equal to the CSRF token for that page.
If you're using Laravel's scaffold, that'll be available as Laravel.csrfToken
; if not, you can echo that value using the csrf_token()
helper.
I know this seems a bit complex, but here's the basics: If you want your local app (maybe a Vue or React SPA) to access your API, but don't feel like programming a whole complex OAuth flow into it, and you want to have OAuth available to external users, Passport makes this incredibly simple. Powerfully simple. For more information and an example of how to set this up in Vue, check out the docs.
I've programmed a lot of OAuth servers. It's a pain. I don't love it at all.
Passport is one of my favorite new features in Laravel in years. Not only does it simplify things I've always hated doing, it also adds a load of new features that I've never even thought to add to my apps. I love it, and I can't wait to use it.
If you take a look at my pull request or theirs, you'll see that it's not a small task to integrate fulltext search into your site. Algolia has since released a free product called Algolia DocSearch that makes it easy to add an Algolia search widget to documentation pages. But for anything else, you're still stuck writing the integration yourself—that is, until now.
Scout is a driver-based fulltext search solution for Eloquent. Scout makes it easy to index and search the contents of your Eloquent models; currently it works with Algolia and ElasticSearch, but Taylor's asked for community contributions to other fulltext search services.
Scout is a separate Laravel package, like Cashier, that you'll need to pull in with Composer. We'll be adding traits to our models that indicate to Scout that it should listen to the events fired when instances of those models are modified and update the search index in response.
Take a look at this syntax for fulltext search, for finding any Review
with the word Llew
in it:
Review::search('Llew')->get();
Review::search('Llew')->paginate(20);
Review::search('Llew')->where('account_id', 2)->get();
All that with very little configuration. That's a beautiful thing.
First, pull in the package (once it's live, and on a Laravel 5.3 app):
composer require laravel/scout
Next, add the Scout service provider (Laravel\Scout\ScoutServiceProvider::class
) to the providers
section of config/app.php
.
We'll want to set up our Scout configuration. Run php artisan vendor:publish
and paste your Algolia credentials in config/scout.php
.
Finally, assuming you're using Algolia, install the Algolia SDK:
composer require algolia/algoliasearch-client-php
Now, go to your model (we'll use Review
, for a book review, for this example). Import the Laravel\Scout\Searchable
trait. You can define which properties are searchable using the toSearchableArray()
method (it defaults to mirroring toArray()
), and define the name of the model's index using the searchableAs()
method (it defaults to the table name).
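Here's a sketch of what the Review model might look like with those optional overrides (the field and index names are made up for illustration):

```php
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Review extends Model
{
    use Searchable;

    /**
     * Customize which fields are indexed.
     * Defaults to mirroring toArray() if you omit this method.
     */
    public function toSearchableArray()
    {
        return [
            'title' => $this->title,
            'body' => $this->body,
        ];
    }

    /**
     * Customize the index name. Defaults to the table name.
     */
    public function searchableAs()
    {
        return 'reviews_index';
    }
}
```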
Once we've done this, you can go check out your Algolia index page on their web site; when you add, update, or delete Review
records, you'll see your Algolia index update. Just like that.
We took a look at this already, but here's a refresh of how to search:
// Get all records from the Review that match the term "Llew"
Review::search('Llew')->get();
// Get all records from the Review that match the term "Llew",
// limited to 20 per page and reading the ?page query parameter,
// just like Eloquent pagination
Review::search('Llew')->paginate(20);
// Get all records from the Review that match the term "Llew"
// and have an account_id field set to 2
Review::search('Llew')->where('account_id', 2)->get();
What comes back from these searches? A Collection of Eloquent models, re-hydrated from your database. The IDs are stored in Algolia, which returns a list of matched IDs, and then Scout pulls the database records for those and returns them as Eloquent objects.
You don't have full access to the complexity of SQL where
commands, but it handles the basic comparison checks you can see in the code samples above.
You can probably guess that we're now making HTTP requests to Algolia on every request that modifies any database records. That can make things slow down very quickly, so you may find yourself wanting to queue these operations—which, thankfully, is simple.
In config/scout.php
, set queue
to true
so that these updates are set to be indexed asynchronously. We're now looking at "eventual consistency"; your database records will receive the updates immediately, and the updates to your search indexes will be queued and updated as fast as your queue worker allows.
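That's a one-line change (assuming you've published Scout's config file as described above):

```php
// config/scout.php
return [
    // ...

    // Queue index updates instead of running them synchronously
    'queue' => true,
];
```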
Let's cover some special cases.
What if you want to perform a set of operations and avoid triggering the indexing in response? Just wrap them in the withoutSyncingToSearch()
method on your model:
Review::withoutSyncingToSearch(function () {
// make a bunch of reviews, e.g.
factory(Review::class, 10)->create();
});
Let's say you're now ready to perform the indexes, now that some bulk operation has been successfully performed. How?
Just add searchable()
to the end of any Eloquent query and it will index all of the records that were found in that query.
Review::all()->searchable();
You can also choose to scope the query to only those you want to index, but it's worth noting that the indexing will insert new records and update old records, so it's not bad to let it run over some records that may be indexed already.
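For example, you could scope the indexing query down first (the published column here is hypothetical):

```php
// Index only the reviews matching a scoped query
Review::where('published', true)->searchable();
```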
This will also work on a relationship:
$user->reviews()->searchable();
You can also un-index any records with the same sort of query chaining, but just using unsearchable()
instead:
Review::where('sucky', true)->unsearchable();
There's an Artisan command for that.™
php artisan scout:import App\\Review
That'll chunk all of the Review
models and index them all.
That's it! With almost no work, you now have complete full-text search running on your Eloquent models.
I'll be writing my usual longer, in-depth blog posts about each of the new 5.3 features that are releasing during Taylor's Laracon talk today, but I wanted to find a single place to write down my notes about the new features that Taylor is announcing for the first time today, so I figured, why not put it in a single blog post here?
This is just my notes from the live stream. I'll update this later with more info, and then will write full-length posts; this will just be casual notes.
Search/ElasticSearch driver; packaged separately like Cashier. Works best with Algolia but would love community support for other drivers.
Model is going to have a Searchable trait.
Indexes the toArray()
function on the model and puts it up in the search index.
Add ScoutServiceProvider
to config/app
and Searchable
trait to your model.
The trait hooks into Eloquent events. Listens to those events and updates your indexes in response.
Closure that allows you to override indexing:
Post::withoutSyncingToSearch(function () {
// make a bunch of posts, e.g.
factory(Post::class, 10)->create();
});
... then later update all of those:
// Could just scope down the query to only those which you haven't indexed yet if you want...
Post::all()->searchable();
Also could do this on a relationship:
$user->posts()->searchable();
It's smart enough to be like "upsert"; it updates any that are already there, and inserts any new ones.
Can also remove from search:
// didn't catch the syntax for this one, sorry! probably something like Post::where('a', 'b')->unsearchable();
These interactions feel slow—makes sense; these are HTTP requests going out!
So: in config/scout.php
set queue
to true
so that these updates are set to be synced async.
// not sure what this does or whether i wrote this syntax down right, feed was cutting out
php artisan scout:import App\Post
You can search with something like:
Post::search('Alice')->get();
Post::search('Alice')->paginate(20);
Post::search('Alice')->where('account_id', 2)->get();
It can't do the full range of SQL where clauses, but it handles the basics.
Want to simplify mail, so creating mail objects:
Mail::to($user)->send(new DeploymentCompleted($server));
DeploymentCompleted
is a PHP class; it represents an email.
// mailable class
public function __construct($server)
{
$this->server = $server;
}
public function build()
{
return $this->view('emails.whatever.viewname');
// second parameter is an optional array of specific data that you want to be available to view:
return $this->view('emails.whatever', ['explicit_data_passed' => 'abc']);
}
Any public properties on the mailable object are accessible in the view, so you don't have to explicitly pass any data.
Mail::to(User::find(1))
->cc(User::find(2))
->bcc(User::find(3))
->send(new etc.);
Mail::to(Users::all())
->send(new etc.);
Mail::to($user)->queue(new etc.);
All the same methods you have within your mail closure like attach
.
public function build()
{
return $this->view()->subject()->attach();
}
Guesses subject from the class name if you don't set it explicitly. E.g. mailable class "DeploymentCompleted" gets auto subject "Deployment Completed".
Quick notifications. Password resets, quick links, etc.
Limited features. No file attachments, CCs, etc. This is not email.
Password reminder in 5.3 will use this out of the box.
$user->notify(new DeploymentCompleted($server));
class DeploymentCompleted
{
public function __construct($server)
{
$this->server = $server;
}
public function via($notifiable)
{
// $notifiable might be a user.. but who knows, you might want to notify a server or a slack channel or something
// you could inspect user preferences here to decide which sort of notification they get
// return a list of notification "drivers"
return ['mail'];
}
public function message()
{
$this->line('You have a new deployment!')
->action('View Deployment', 'http://laravel.com')
->line('Check it out');
}
}
Notifiable trait.
Mail driver comes with a slick default template, responsive, etc. but you can also export/publish it into your app and customize it yourself.
Different states:
$this->line()->action()->line()->error();
Some drivers know how to differentiate states; some don't. For example, the error
state in the mail driver will get a big red button instead of a big blue button. The success()
state gets a green button.
New settings in config/app.php
: name
and logo
for notifications.
Covered above.
Table that holds these notifications. Polymorphic; columns for notifiable type, id, level, intro, outro, action text, action url, has been read or not. Laravel doesn't know how to check whether it's read or not; you handle that.
Just add 'database' to the via()
method and all of a sudden it's getting it; your calling code doesn't know or care which via
driver it's gonna use.
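A sketch of that via() change (channel names as announced in the talk):

```php
public function via($notifiable)
{
    // Deliver this notification by email and also store it
    // in the notifications database table
    return ['mail', 'database'];
}
```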
Add slack
to the via()
method. Some drivers require more info. Context:
routeNotificationForSlack()
method on the User (or whatever else is notifiable).
Convention is routeNotificationFor{DriverName}
.
For Slack, that method should return Slack webhook URL: e.g. return $this->slack_webhook_url
.
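On the User model, that might look something like this (the column name is an assumption):

```php
// app/User.php
public function routeNotificationForSlack()
{
    // Return the Slack webhook URL this user's notifications
    // should be delivered to
    return $this->slack_webhook_url;
}
```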
Add nexmo
or sms
, hard to tell from Taylor's audio. Add your Nexmo API keys.
Go to the notification class and add the ShouldQueue
trait. Now they're all queued. Boom goes the dynamite.
Full OAuth2 Server implementation in Laravel in like 5 minutes!!!!!!!!!
In Laravel 5.2, we got A) the idea of multiple auth drivers and B) the token-based authentication. Token-based auth works, it's fine, but it's more important as the ground layer for this.
Steps to use it:
config/app
, add Laravel\Passport\PassportServiceProvider
to your providers list.
php artisan migrate
and it'll include the Passport migrations too.
Import the Laravel\Passport\HasApiTokens
trait into your User model.
AuthServiceProvider
and use Laravel\Passport\Passport
, then in the boot()
method run Passport::routes()
boot()
method of AuthServiceProvider
, after Passport::routes()
; e.g. Passport::tokensCan(['conference' => 'Access your conference information'])
config/auth.php
, guards.api.driver
; change the api
guard to use passport
driver instead of token
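For reference, the relevant `config/auth.php` change looks roughly like this (a sketch; the `users` provider is the default assumption):

```php
// config/auth.php (sketch of the relevant section)
'guards' => [
    'web' => [
        'driver' => 'session',
        'provider' => 'users',
    ],
    'api' => [
        'driver' => 'passport', // was 'token'
        'provider' => 'users',
    ],
],
```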
Passport exposes a JSON API for your frontend to consume to let you manage it.
Comes with Vue components by default to make it easy for you to manage them. It's just a reference implementation, but you can use it if you want.
Three default Vue components out of the box:
<!-- let people make clients -->
<passport-clients></passport-clients>
<!-- list of clients people have authorized to access our account -->
<passport-authorized-clients></passport-authorized-clients>
<!-- make it simple to generate a token right in the UI to play with -->
<passport-personal-access-tokens></passport-personal-access-tokens>
For these examples Taylor made an app at http://passport.dev/ that has Passport installed. This app is providing the OAuth API. Then another at http://example.dev/ that is a client, consuming it.
Look by default:
Creating a client:
Once you create a client, you get a secret and a client ID. Go to your consuming client (another site, etc.) and put that key and ID in there.
Showed a sample app that CONSUMES this API (not one that provides it); lives at http://consumer.dev/:
// routes/web.php
use Illuminate\Http\Request;
Route::get('/', function () {
$query = http_build_query([
'client_id' => 1,
'redirect_uri' => 'http://consumer.dev/callback',
'response_type' => 'code',
'scope' => 'conference'
]);
return redirect('http://passport.dev/oauth/authorize?' . $query);
});
Route::get('callback', function (Request $request) {
$http = new GuzzleHttp\Client;
$response = $http->post('http://passport.dev/oauth/token', [
'form_params' => [
'grant_type' => 'authorization_code',
'client_id' => 1, // from admin panel above
'client_secret' => 'abc', // from admin panel above
'redirect_uri' => 'http://consumer.dev/callback',
'code' => $request->code
]
]);
return json_decode((string) $response->getBody(), true)['access_token'];
});
When you visit http://consumer.dev/ it tries to authenticate, sending you over to the Passport site; you get this screen:

When you authorize, it takes you back to http://consumer.dev/callback and you have access to your token now.
To prove, Taylor makes a route in his passport app that just returns the authenticated user, puts it in the routes/api.php
routes file. Calls it from Postman, pastes the JWT token from above into the Authorization
header and calls the page, and it just works. (Authorization: Bearer TOKENHERE
)
Easy to revoke applications in the UI:
Keeps saying: "This is not what your UI needs to look like; it's just a free reference application."
New in Passport that League package doesn't have: Want it to be easy to create a token in the UI to just play around with the API. Since every token is associated with a client (last one we made was associated with http://consumer.dev
), make a personal client: `php artisan passport:client --personal`. Then you can go to the Personal Access Tokens component and hit "Create New Token". Creates them with your app (`passport.dev`) as the listed client.
Middleware to limit users' route access based on your scopes. Add them (you can name them whatever you want) in the HTTP kernel: `scope` and `scopes`. The `scope` middleware checks for any one of the given scopes; `scopes` requires all defined scopes.
// add to the Http\Kernel $routeMiddleware property
// you can name it whatever you want
'scope' => \Laravel\Passport\Http\Middleware\CheckForAnyScope::class,
'scopes' => \Laravel\Passport\Http\Middleware\CheckScopes::class,
If you want to limit the user to only access a route if they have the conference
scope:
Route::get('/whatever', function () {
// do stuff
})->middleware('scope:conference');
Multiples can be comma-separated; `scope` allows the user through if they have any of the provided scopes: `->middleware('scope:conference,otherScope')`. If you want to only let them through if they have all of the passed scopes, use `scopes`: `->middleware('scopes:conference,otherScope')`.
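The any-vs-all distinction is easy to sketch in plain PHP; this is only an illustration of the semantics, not Passport's actual middleware code:

```php
<?php

// 'scope' middleware semantics: pass if the token has ANY of the required scopes.
function hasAnyScope(array $tokenScopes, array $required): bool
{
    return count(array_intersect($required, $tokenScopes)) > 0;
}

// 'scopes' middleware semantics: pass only if the token has ALL required scopes.
function hasAllScopes(array $tokenScopes, array $required): bool
{
    return array_diff($required, $tokenScopes) === [];
}
```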
If you have a frontend that's consuming the API, you may not want to do the whole OAuth dance. But you might want the OAuth flow to still be available for external API users.
Trick for your frontend (which has your user already authenticated via Laravel and sessions) to access your API and get around the OAuth flow.
Go to `Http\Kernel` and add new middleware to the `web` group:

`Laravel\Passport\Http\Middleware\CreateFreshApiToken::class`,
This adds a JWT token as a cookie to anyone who's logged in. Uses the synchronizer token pattern to embed the CSRF token into the JWT, and requires a CSRF header if you sent that cookie, and the two have to match. Some kinda magic.
Safe because other apps can't read your cookies so they can't get your CSRF token out of the JWT token. Boom. Can make API requests if logged in without worrying about OAuth tokens.
"My API doesn't have to be an after thought." Set the whole thing up in 15 minutes with demos.
Latest League package so they're JWT tokens.
In Laravel 5.2 we temporarily saw two separate route groups in routes.php
, one for "web" and one for "API", but that went away mid-5.2.
What stuck around, though, was the idea of multiple middleware groups, and out of the box there's one for "web" routes and one for "API" routes.
The "web" group gets everything you'd expect your normal web users to need: sessions, cookies, CSRF protection, etc. The "API" group is lighter, and came by default with the "throttle" middleware, making the case for a stateless REST API.
In 5.3, the app/Http/routes.php
file has now moved to the root routes/
directory, and it's now split into two files: web.php
and api.php
. As you can probably guess, the routes in routes/web.php
are wrapped with the web
middleware group and the routes in routes/api.php
are wrapped with the api
middleware group.
There are a few benefits of this. First, we get the suggestion and easy implementation of the distinction between our web routes and our API routes. Second, it's now an application-level convention to have multiple routes files, which will likely free more developers up to feel comfortable organizing their routes file this way. And third, this moves the routes
directory out of app/
, which both makes the routes
directory more accessible to new users and makes app/
a fully PSR-4-autoloaded directory, which feels just a bit pure-r.
If you want to customize this or add your own separate routes files, check out App\Providers\RouteServiceProvider
for inspiration:
...
public function map()
{
$this->mapApiRoutes();
$this->mapWebRoutes();
//
}
protected function mapApiRoutes()
{
Route::group([
'middleware' => ['api', 'auth:api'],
'namespace' => $this->namespace,
'prefix' => 'api',
], function ($router) {
require base_path('routes/api.php');
});
}
protected function mapWebRoutes()
{
Route::group([
'namespace' => $this->namespace, 'middleware' => 'web',
], function ($router) {
require base_path('routes/web.php');
});
}
As you can see, there's an easy syntax for wrapping the results of any given routes file with a route group and then applying whatever prefixes, middleware, or anything else you'd like.
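Following the same pattern, a hypothetical extra routes file (the `routes/admin.php` name and `admin` prefix are my own invention, not Laravel defaults) could be mapped like this:

```php
// In App\Providers\RouteServiceProvider (sketch; call this from map()).
// 'routes/admin.php' and the 'admin' prefix are hypothetical examples.
protected function mapAdminRoutes()
{
    Route::group([
        'middleware' => ['web', 'auth'],
        'namespace' => $this->namespace,
        'prefix' => 'admin',
    ], function ($router) {
        require base_path('routes/admin.php');
    });
}
```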
That's it! Enjoy!
However, for the sake of making the pagination library easier to extract for non-Laravel projects, Laravel 5.0 (or maybe even earlier?) introduced a much more complex (but more portable) system for pagination templates.
Thankfully, in Laravel 5.3, we're going to go back to how it always was: simple and easy.
If you're not familiar, here's a quick rundown of how it works to use pagination in Laravel.
// routes file
Route::get('tasks', function () {
return view('tasks.index')
->with('tasks', Task::paginate(10));
});
// resources/views/tasks/index.blade.php
@foreach ($tasks as $task)
<!-- echo the task or whatever -->
@endforeach
{{ $tasks->links() }}
By default, the paginate()
method on your Eloquent objects reads the query parameters of your request and detects which page you're on. So in this example, it'll read the ?page
query parameter and grab 10 records for that page. It'll pass those 10 in, and when we foreach
on the $tasks
variable, we'll just be looping over those 10.
But if you retrieve those 10 records using paginate()
instead of something like all()
, you get a new method available on your $tasks
object (or other Eloquent result) named links()
, and this method returns the view string appropriate for showing a list of pagination buttons:
<ul class="pagination">
<li class="disabled"><span>«</span></li>
<li class="active"><span>1</span></li>
<li><a href="http://53pagination.dev?page=2">2</a></li>
<li><a href="http://53pagination.dev?page=3">3</a></li>
<li><a href="http://53pagination.dev?page=2" rel="next">»</a></li>
</ul>
OK, so let's finally get to the dirt. How do you customize this template in 5.3?
By default, the template that is rendering this can be found in the Illuminate\Pagination
component: resources/views/bootstrap-3.blade.php
. This is what it looks like right now:
<ul class="pagination">
<!-- Previous Page Link -->
@if ($paginator->onFirstPage())
<li class="disabled"><span>«</span></li>
@else
<li><a href="{{ $paginator->previousPageUrl() }}" rel="prev">«</a></li>
@endif
<!-- Pagination Elements -->
@foreach ($elements as $element)
<!-- "Three Dots" Separator -->
@if (is_string($element))
<li class="disabled"><span>{{ $element }}</span></li>
@endif
<!-- Array Of Links -->
@if (is_array($element))
@foreach ($element as $page => $url)
@if ($page == $paginator->currentPage())
<li class="active"><span>{{ $page }}</span></li>
@else
<li><a href="{{ $url }}">{{ $page }}</a></li>
@endif
@endforeach
@endif
@endforeach
<!-- Next Page Link -->
@if ($paginator->hasMorePages())
<li><a href="{{ $paginator->nextPageUrl() }}" rel="next">»</a></li>
@else
<li class="disabled"><span>»</span></li>
@endif
</ul>
If you want to customize the pagination, you have two options: you can either publish the built-in view and edit it, or you can create a new file and manually link the Paginator to it.
Probably the easiest option is to run `php artisan vendor:publish`. It'll publish the template to `resources/views/vendor/pagination` and you can just edit it there. This is the preferred option unless you have some specific customization needs.
If you'd like to instead create your own pagination file and manually link to it, you can do that too. Create a new file that's a duplicate of that file, and modify it for your needs. Save it somewhere in resources/views
; for now let's keep it simple and use resources/views/partials/pagination.blade.php
.
Now, let's register it. Run `\Illuminate\Pagination\LengthAwarePaginator::defaultView('partials.pagination')` in the `boot()` method of a service provider.
Note: If you'd like to customize which template is used by just a single paginator, you can pass the view name to the `links()` method: `{{ $users->links('partials.pagination') }}`.
So, to get this entire thing to work, I took these steps:
- Run `php artisan vendor:publish`
- Open `resources/views/vendor/pagination/default.blade.php` and customize it to my heart's desire

That's it!
Note: These instructions show you how to customize the length-aware paginator, which is the most common. But if you're working with the simple paginator, you can customize that too. Just use the file named `simple-default` as your base instead of `default`.
The new image validation rule is called `dimensions`, and you can pass the following parameters to it:

- `min_width`: Images narrower than this pixel width will be rejected
- `max_width`: Images wider than this pixel width will be rejected
- `min_height`: Images shorter than this pixel height will be rejected
- `max_height`: Images taller than this pixel height will be rejected
- `width`: Images not exactly this pixel width will be rejected
- `height`: Images not exactly this pixel height will be rejected
- `ratio`: Images not exactly this ratio (expressed as "width/height") will be rejected

You can combine any rules that make sense together. Let's take a look at a few examples. First, let's set up our base install.
// routes file
Route::get('/', function () {
return view('form');
});
Route::post('/', 'ImageController@postImage');
<!--form.blade.php-->
<form method="POST" enctype="multipart/form-data">
<input type="file" name="avatar">
<input type="submit">
</form>
Now, let's make our ImageController
and take a look at a few sample validations.
// ImageController
public function postImage(Request $request)
{
$this->validate($request, [
'avatar' => 'dimensions:min_width=250,min_height=500'
]);
// or...
$this->validate($request, [
'avatar' => 'dimensions:min_width=500,max_width=1500'
]);
// or...
$this->validate($request, [
'avatar' => 'dimensions:width=100,height=100'
]);
// or...
// Ensures that the width of the image is 1.5x the height
$this->validate($request, [
'avatar' => 'dimensions:ratio=3/2'
]);
}
That's it! One less thing you have to manage yourself in your own code.
Laravel 5.3 introduces a simple syntax for lookups and updates based on the value of specific keys in your JSON columns.
Let's assume we have a table with a JSON column:
...
class CreateContactsTable extends Migration
{
public function up()
{
Schema::create('contacts', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->json('meta');
$table->timestamps();
});
}
We'll imagine that each contact has some foundational information like their name, but some other properties are flexible. The best way to store them might be a JSON column—like our meta
column above.
We could imagine one contact (output to JSON for blog-post-readability) might look like this:
{
"id": 1,
"name": "Alphonse",
"meta": {
"wants_newsletter": true,
"favorite_color": "red"
}
}
So, let's get all of our contacts whose favorite color is red. As you can see below, we start with the column (`meta`), followed by an arrow (`->`), followed by the key name of the JSON property (`favorite_color`).
$redLovers = DB::table('contacts')
->where('meta->favorite_color', 'red')
->get();
This means "look for every entry in the contacts
table which has a JSON object stored in meta
that has a key of favorite_color
that's set to red
."
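To make the semantics concrete, here's a plain-PHP analogy of what that lookup selects (an illustration only; the real filtering happens inside MySQL):

```php
<?php

// Sample rows as PHP arrays, mirroring the contacts table with a JSON meta column.
$contacts = [
    ['id' => 1, 'name' => 'Alphonse', 'meta' => ['wants_newsletter' => true, 'favorite_color' => 'red']],
    ['id' => 2, 'name' => 'Berta',    'meta' => ['favorite_color' => 'blue']],
];

// Rough equivalent of ->where('meta->favorite_color', 'red')
$redLovers = array_values(array_filter($contacts, function ($contact) {
    return ($contact['meta']['favorite_color'] ?? null) === 'red';
}));
```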
What if we want to update Alphonse to no longer want the newsletter?
DB::table('contacts')
->where('id', 1)
->update(['meta->wants_newsletter' => false]);
What's great here is, even if the wants_newsletter
key wasn't previously set on this record, it will be now, and it'll be correctly set to false
.
See the power? We can query based on properties in the JSON column and we can update individual pieces of the JSON column without having to know, or care about, the others. Brilliant.
Note: MariaDB does not have JSON columns, and PostgreSQL has JSON columns but this feature appears to not currently work on them. So consider this a MySQL 5.7+ feature for now.
If you've worked with Laravel's collections, you've probably used `filter()` or `reject()`. For a quick refresher, this is how you might use both:
$vips = $people->filter(function ($person) {
return $person->status === 'vip';
});
$nonVips = $people->reject(function ($person) {
return $person->status === 'vip';
});
You might not know it, but there's also a `where()` method, which is pretty simple and gives you the same functionality:
$vips = $people->where('status', 'vip');
Prior to 5.3, this would check strictly (===
), just like in our examples above.
In 5.3, that same line is now a loose check (==
), but you can also customize the comparison operator. That makes all of this possible:
$nonVips = $people->where('status', '!==', 'vip');
$popularPosts = $posts->where('views', '>', 500);
$firstTimeUsers = $people->where('logins', '===', 1);
You can see all of the possible operators at the time of writing this post here: Collection#l214-260
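Conceptually, it's just a comparison switched on the operator string; here's a rough plain-PHP sketch of that dispatch (not Laravel's actual implementation):

```php
<?php

// Rough sketch of the operator dispatch behind Collection::where().
function operatorCompare($retrieved, string $operator, $value): bool
{
    switch ($operator) {
        case '=':
        case '==':  return $retrieved == $value;   // loose (the new 5.3 default)
        case '===': return $retrieved === $value;  // strict (the old behavior)
        case '!=':  return $retrieved != $value;
        case '!==': return $retrieved !== $value;
        case '<':   return $retrieved < $value;
        case '>':   return $retrieved > $value;
        case '<=':  return $retrieved <= $value;
        case '>=':  return $retrieved >= $value;
        default:    return false;
    }
}
```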
As I was writing my book I noticed a pattern in the global helper functions like `session()` and, in some ways, `cookie()`. There are three primary functions that they can perform: `get` a value, `put` a value, or return an instance of their backing service.
For example:
- `session('abc', null)` gets the value of `abc`, or an optional fallback of `null`.
- `session(['abc' => 'def'])` sets the value of `abc` to `def`.
- `session()` returns an instance of the `SessionManager`.

The third option means you can use `session()->all()` (or any other methods) just like you would `Session::all()`.
I mentioned that it seems like there should be a cache()
helper, and before I could even think much more about it, Jeffrey (Way) had already written one up. So! Behold! The global cache()
helper, new in Laravel 5.3.
The `cache()` global helper

Like `session()`, the `cache()` global helper can perform three primary functions: `get`, `put`, or return an instance of the backing service.
For example:
- `cache('abc', null)` gets the cached value of `abc`, or an optional fallback of `null`.
- `cache(['abc' => 'def'], 5)` sets the value of `abc` to `def`, for the duration of `5` minutes.
- `cache()` returns an instance of the `CacheManager`.

The third option means you can use `cache()->forever()` (or any other methods) just like you would `Cache::forever()`.
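The three call shapes are easy to emulate in plain PHP. This toy, array-backed sketch (not Laravel's actual helper) shows the dispatch logic:

```php
<?php

// Toy stand-in for the CacheManager, backed by an array (expiry omitted).
class FakeCache
{
    private $store = [];

    public function get($key, $default = null)
    {
        return $this->store[$key] ?? $default;
    }

    public function put($key, $value)
    {
        $this->store[$key] = $value;
    }
}

// Sketch of how a helper like cache() dispatches on its arguments.
function cacheHelper(FakeCache $cache, ...$arguments)
{
    if (empty($arguments)) {
        return $cache;                      // cache() -> the manager itself
    }
    if (is_array($arguments[0])) {
        foreach ($arguments[0] as $key => $value) {
            $cache->put($key, $value);      // cache(['k' => 'v'], $minutes)
        }
        return null;
    }
    return $cache->get($arguments[0], $arguments[1] ?? null); // cache('k', $default)
}
```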
That's it! Enjoy!
In Laravel 5.3, there's an update to Eloquent's `firstOrCreate` method.
If you've never used it before, you can pass an array of values to firstOrCreate
and it will look up whether a record exists with those properties. If so, it'll return that instance; if not, it'll create it and then return the created instance.
Here's an example:
$tag = Tag::firstOrCreate(['slug' => 'matts-favorites']);
This is good. It's very useful. But.
What if the tag with the slug matts-favorites
represents a tag with the label Matts favorites
?
$tag = Tag::firstOrCreate(['slug' => 'matts-favorites', 'label' => 'Matts Favorites']);
OK, that worked well. But now, imagine this scenario: you want to create a tag with slug of matts-favorites
and label of Matt's favorites
unless there's already a tag with slug matts-favorites
, in which case you just want that tag—even if it doesn't give you the label you want? Check it:
$tag = Tag::firstOrCreate(
['slug' => 'matts-favorites'],
['label' => "Matt's Favorites"]
);
We've specified that the Tag
model should look up a tag where slug
is matts-favorites
and return it if so. And if not, create a new tag with slug matts-favorites
and label Matt's Favorites
, and return that. Bam. Beautiful.
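The two-array behavior can be sketched in plain PHP with a toy in-memory emulation (this is not Eloquent's code, just the lookup-then-create logic):

```php
<?php

// Toy emulation of firstOrCreate against an in-memory array of rows.
function firstOrCreate(array &$rows, array $attributes, array $values = [])
{
    foreach ($rows as $row) {
        // Does an existing row match every lookup attribute?
        if (array_intersect_key($row, $attributes) == $attributes) {
            return $row;                      // found: $values are ignored
        }
    }
    $row = array_merge($attributes, $values); // not found: create with both arrays
    $rows[] = $row;
    return $row;
}
```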
Laravel's Blade templating language provides something called "directives", which are custom tags—often control structures—that are prefaced with @
. If you've ever written templates with Blade, you're likely familiar with @if
, @foreach
, and so on.
In general, these control structure directives simply emulate their PHP analogs; for example, `@if (condition)` is exactly the same as `<?php if (condition):`.
The `$loop` variable

In 5.3, the `@foreach` directive is getting a bit of a superpower, in the form of a new `$loop` variable that will be available inside every `@foreach` loop.

The `$loop` variable is a `stdClass` object that provides meta information about the loop you're currently inside. Take a look at the properties it exposes:
- `index`: the 0-based index of the current item in the loop; `0` would mean "first item"
- `iteration`: the 1-based index of the current item in the loop; `1` would mean "first item"
- `remaining`: how many items remain in the loop; if the current item is the first of three, would return `2`
- `count`: the count of items in the loop
- `first`: boolean; whether this is the first item in the loop
- `last`: boolean; whether this is the last item in the loop
- `depth`: integer; how many "levels" deep this loop is; returns `1` for a loop, `2` for a loop within a loop, etc.
- `parent`: if this loop is within another `@foreach` loop, returns a reference to the `$loop` variable for the parent loop item; otherwise returns `null`
Most of this is pretty self-explanatory; it means you can do something like this:
<ul>
@foreach ($pages as $page)
<li>{{ $page->title }} ({{ $loop->iteration }} / {{ $loop->count }})</li>
@endforeach
</ul>
But you also get a reference to parent $loop
variables when you have a loop-within-a-loop. You can use depth
to determine whether this is a loop-within-a-loop, and parent
to grab the $loop
variable of its parent. That opens up templating options like this:
<ul>
@foreach ($pages as $page)
<li>{{ $loop->iteration }}: {{ $page->title }}
@if ($page->hasChildren())
<ul>
@foreach ($page->children() as $child)
<li>{{ $loop->parent->iteration }}.{{ $loop->iteration }}:
{{ $child->title }}</li>
@endforeach
</ul>
@endif
</li>
@endforeach
</ul>
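The `$loop` properties themselves are simple to compute; this plain-PHP sketch builds the same metadata per iteration (an illustration, not Blade's compiled output):

```php
<?php

// Build $loop-style metadata for each item of an array, as Blade does per iteration
// (depth and parent are omitted, since they require nesting context).
function loopMetadata(array $items): array
{
    $count = count($items);
    $result = [];
    foreach (array_values($items) as $index => $item) {
        $result[] = (object) [
            'index'     => $index,
            'iteration' => $index + 1,
            'remaining' => $count - $index - 1,
            'count'     => $count,
            'first'     => $index === 0,
            'last'      => $index === $count - 1,
        ];
    }
    return $result;
}
```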
That's it!
Echo comes in two parts: a series of improvements to Laravel's Event broadcasting system, and a new JavaScript package.
The backend components of Echo are baked into the Laravel core by default as of 5.3, and don't need to be imported (so it's different from something like Cashier). You could use these backend improvements with any JavaScript frontend, not just those using the Echo JavaScript library, and still see some significant improvements in ease-of-use for working with WebSockets. But they work even better when you use the Echo JavaScript library.
The Echo JavaScript library can be imported via NPM and then imported into your app's JavaScript. It's a layer of sugar on top of either Pusher JS (the JavaScript SDK for Pusher) or Socket.io (the JavaScript SDK many folks use on top of Redis WebSockets architectures).
Before we go any further, let's take a look at how you might use Echo, to see if it's even something you might be interested in.
WebSockets will be useful to you if you want to send messages to your users—whether those messages are notifications or even updates to the structure of a page's data—while the users stay on the same page. True, you could accomplish this with long-polling, or some sort of regularly scheduled JavaScript ping, but this has the potential to overwhelm your server pretty quickly. WebSockets are powerful, don't overload your servers, can scale as much as you'd like, and they're nearly instantaneous.
If you want to use WebSockets within a Laravel app, Echo provides a nice, clean syntax for simple features like public channels and complex features like authentication, authorization, and private and presence channels.
Important detail to know beforehand: WebSockets implementations provide three types of channels: public, meaning anyone can subscribe; private, meaning the frontend has to authenticate the user against a backend and then ensure that the user has permission to subscribe to the given channel; and presence, which doesn't allow for sending messages and instead just notifies that a user is "present" in the channel or not.
Let's say you want to create a chat system, with multiple rooms. Ambitious, right? Well, we'll probably want to fire an Event every time a new chat message is received.
Note: You'll need to be familiar with Laravel's Event Broadcasting in order to get the most out of this article. I wrote a brief intro to Event broadcasting a while back that would be worth reading over first.
So, first, let's create the event:
php artisan make:event ChatMessageWasReceived
Open that class (app/Events/ChatMessageWasReceived.php
) and mark it as implementing the ShouldBroadcast
interface. For now, let's just have it broadcast to a public channel named "chat-room.1"
.
In 5.3, there's a new structure in the `broadcastOn()` method that frees you up from having to define private and presence channels by prefacing them with "private-" and "presence-". Now, you can just wrap the channel name in a simple `PrivateChannel` or `PresenceChannel` object. So, to broadcast to a public channel, `return "chat-room.1";`. To broadcast to a private channel, `return new PrivateChannel("chat-room.1");`. And to broadcast to a presence channel, `return new PresenceChannel("chat-room.1");`.
You'll probably want to create a model and a migration for `ChatMessage`, and give it a `user_id` and a `message` field.
php artisan make:model ChatMessage --migration
Here's a sample migration:
...
class CreateChatMessagesTable extends Migration
{
public function up()
{
Schema::create('chat_messages', function (Blueprint $table) {
$table->increments('id');
$table->string('message');
$table->integer('user_id')->unsigned();
$table->timestamps();
});
}
public function down()
{
Schema::drop('chat_messages');
}
}
And now let's update our event to inject a user and a chat message:
...
class ChatMessageWasReceived extends Event implements ShouldBroadcast
{
use InteractsWithSockets, SerializesModels;
public $chatMessage;
public $user;
public function __construct($chatMessage, $user)
{
$this->chatMessage = $chatMessage;
$this->user = $user;
}
public function broadcastOn()
{
return [
"chat-room.1"
];
}
}
And make our fields fillable in the model:
...
class ChatMessage extends Model
{
public $fillable = ['user_id', 'message'];
}
Now, create a way to trigger that event. For testing purposes, I often create an Artisan command to trigger my events. Let's try that.
php artisan make:command SendChatMessage
Open that file at app/Console/Commands/SendChatMessage.php
. Give it a signature that allows you to pass it a message, and then set its handle()
method to trigger our ChatMessageWasReceived
event with that message:
...
class SendChatMessage extends Command
{
protected $signature = 'chat:message {message}';
protected $description = 'Send chat message.';
public function handle()
{
// Fire off an event, just randomly grabbing the first user for now
$user = \App\User::first();
$message = \App\ChatMessage::create([
'user_id' => $user->id,
'message' => $this->argument('message')
]);
event(new \App\Events\ChatMessageWasReceived($message, $user));
}
}
Now open app/Console/Kernel.php
and add that command's class name to the $commands
property so it's registered as a viable Artisan command.
...
class Kernel extends ConsoleKernel
{
protected $commands = [
Commands\SendChatMessage::class,
];
...
Almost done! Finally, you need to go sign up for a Pusher account (Echo works with Redis and Socket.io too, but we're going to use Pusher for this example). Create a new app in your Pusher account and grab your key, secret, and App ID; then set those values in your .env
file as PUSHER_KEY
, PUSHER_SECRET
, and PUSHER_APP_ID
. Also, while you're in there, set the BROADCAST_DRIVER
to pusher
.
And, finally, require the Pusher library:
composer require pusher/pusher-php-server:~2.0
Now you can send events out to your Pusher account by running commands like this:
php artisan chat:message "Howdy everyone"
If everything worked correctly, you should be able to log into your Pusher debug console, trigger that event, and see this appear:
So you now have a simple system for pushing events to Pusher. Let's get to what Echo provides for you.
The simplest way to bring the Echo JavaScript library into your project is to import it with NPM and Elixir. So, let's import it and Pusher JS first:
# Install the basic Elixir requirements
npm install
# Install Pusher JS and Echo, and add to package.json
npm install --save laravel-echo pusher-js
Next, let's set up `resources/assets/js/app.js` to import it:
import Echo from "laravel-echo"
window.Echo = new Echo({
broadcaster: 'pusher',
key: 'your-pusher-key-here'
});
// @todo: Set up Echo bindings here
Finally, run gulp
or gulp watch
and be sure to link the resulting file into your HTML template, if you aren't already.
Tip: If you're trying this on a fresh Laravel install, run
php artisan make:auth
before you try to write all the HTML yourself. Later features will require you to have Laravel's authentication running anyway, so just make it happen now.
Echo needs access to your CSRF token; if you're using the Laravel auth bootstrap, it will be available to Echo as `Laravel.csrfToken`. But if you're not, you can make it available yourself by creating a `csrf-token` meta tag:
<html>
<head>
...
<meta name="csrf-token" content="{{ csrf_token() }}">
...
</head>
<body>
...
<script src="js/app.js"></script>
</body>
</html>
Fantastic! Let's get to learning the syntax.
Let's go back to resources/assets/js/app.js
and listen to the public channel chat-room.1
that we are broadcasting our Event to, and log any messages that come in to our user's console:
import EchoLibrary from "laravel-echo"
window.Echo = new EchoLibrary({
broadcaster: 'pusher',
key: 'your-pusher-key-here'
});
Echo.channel('chat-room.1')
.listen('ChatMessageWasReceived', (e) => {
console.log(e.user, e.chatMessage);
});
We're telling Echo: subscribe to the public channel named chat-room.1
. Listen to an event named ChatMessageWasReceived
(and notice how Echo keeps you from having to enter the full event namespace). And when you get an event, pass it to this anonymous function and act on it.
And take a look at our console now:
Bam! With just a few lines of code, we have full access to the JSON-ified representation of our chat message and of our user. Brilliant! We can use this data not just to send users messages, but to update the in-memory data stores of your apps (VueJS, React, or whatever else)—allowing each WebSockets message to actually update the on-page display.
Let's move on to private and presence channels, which both require a new piece of complexity: authentication and authorization.
Let's make chat-room.1
private. First, we'll need to add private-
to the channel name. Edit the broadcastOn()
method of our Laravel Event, ChatMessageWasReceived
, and set the channel name to be private-chat-room.1
. Or, to make it cleaner, you can pass the channel name to a new instance of PrivateChannel
, which does the same thing: return new PrivateChannel('chat-room.1');
.
Next, we'll use Echo.private()
in app.js
instead of Echo.channel()
.
Everything else can remain the same. However, if you try running the script, you'll notice that it doesn't work, and if you look at your console, you might see this error:
This is hinting at the next big feature Echo handles for you: authentication and authorization.
There are two pieces to the auth system. First, when you first open up your app, Echo wants to POST to your /broadcasting/auth
route. Once we set up the Laravel-side Echo tools, that route will associate your Pusher socket ID with your Laravel session ID. Now Laravel and Pusher know how to identify that any given Pusher socket connection is connected to a particular Laravel session.
The second piece of Echo's authentication and authorization features is that, when you want to access a protected resource (a private or presence channel), Echo will ping /broadcasting/auth
to see whether you are allowed to visit that channel. Because your socket ID will be associated with your Laravel session, we can write simple and clear ACL rules for this route; so, let's get started.
First, edit config/app.php
and find App\Providers\BroadcastServiceProvider::class,
, and un-comment it. Now open that file (app/Providers/BroadcastServiceProvider.php
). You should see something like this:
...
class BroadcastServiceProvider extends ServiceProvider
{
public function boot()
{
Broadcast::routes();
/*
* Authenticate the user's personal channel...
*/
Broadcast::channel('App.User.*', function ($user, $userId) {
return (int) $user->id === (int) $userId;
});
}
There are two important pieces here. First, Broadcast::routes()
registers the broadcast routes that Echo uses for authentication and authorization.
Second, Broadcast::channel()
calls make it possible for you to define access permissions for a channel or group of channels (using the *
character to match multiple channels). Laravel ships with a default channel associated with a specific user, to show what it looks like to limit access to a single, currently-authenticated user.
So we have a private channel named chat-room.1
. That suggests we're going to have multiple chat rooms (chat-room.2
, etc.) so let's define permissions here for all chat rooms:
Broadcast::channel('chat-room.*', function ($user, $chatroomId) {
// return whether or not this current user is authorized to visit this chat room
});
As you can see, the first value that's passed to the Closure is the current user and, if there are any *
characters that could be matched, they'll be passed as additional parameters.
For the sake of this blog post, we'll just hand-code the authorization, but you would at this point want to create a model and migration for chat rooms, add a many-to-many relationship with the user, and then in this Closure check whether the current user is connected to this chat room or not; something like if ($user->chatrooms->contains($chatroomId))
. For now, let's just pretend:
Broadcast::channel('chat-room.*', function ($user, $chatroomId) {
if (true) { // Replace with real ACL
return true;
}
});
Go test it out and see what you get.
Having trouble? Remember, you need to have set your app.js to use echo.private() instead of echo.channel(); you need to have updated your Event to broadcast on a private channel named chat-room.1 instead of a public channel; you need to have updated your BroadcastServiceProvider. And you need to have logged in to your app. And you need to re-run gulp, if you're not using gulp watch.
You should be able to see an empty console log, then you can trigger our Artisan command, and you should see your user and chatMessage there—just like before, but now it's restricted to authenticated and authorized users!
If you see the following message instead, that's fine! That means everything's working, and your system decided that you were not authorized for this channel. Go double-check all of your code, but this doesn't mean anything's broken; it just means you're not authorized.
Make sure to log in and then try again.
So, we now can decide in our backend which users have access to which chat rooms. When a user sends a message to a chat room (likely by sending an AJAX request to the server, but in our example, through an Artisan command) it will trigger a ChatMessageWasReceived
event which will then be broadcast, privately, to all of our users over WebSockets. What's next?
Let's say we want to set up an indicator on the side of our chat room showing who's there; maybe we want to play a noise when someone enters or leaves. There's a tool for that, and it's called a presence channel.
We'll need two things for this: a new Broadcast::channel()
permission definition and a new channel that's prefixed with presence-
(which we'll create by returning a PresenceChannel
instance from the event's broadcastOn
method). Interestingly, because channel auth definitions don't require the private-
and presence-
prefix, both private-chat-room.1
and presence-chat-room.1
will be referenced the same way in Broadcast::channel()
calls: chat-room.*
. That's actually fine, as long as you're OK with them having the same authorization rules. But I know that might be confusing, so for now we're going to name the channel a bit differently. Let's use presence-chat-room-presence.1
, which we'll auth as chat-room-presence.1
.
So, since we're just talking about presence, we don't need to tie this channel to an Event. Instead, we're just going to give app.js
directions to join us to the channel:
Echo.join('chat-room-presence.1')
.here(function (members) {
// runs when you join, and when anyone else leaves or joins
console.table(members);
});
We're "joining" a presence channel, and then providing a callback that will be triggered once when the user loads this page, and then once every time another member joins or leaves this presence channel. In addition to here
, which is called on all three events, you can add a listener for then
(which is called when the user joins), joining
(which is called when other users join the channel), and leaving
(which is called when other users leave the channel).
Echo.join('chat-room-presence.1')
.here(function (members) {
// runs when you join
console.table(members);
})
.joining(function (joiningMember, members) {
// runs when another member joins
console.table(joiningMember);
})
.leaving(function (leavingMember, members) {
// runs when another member leaves
console.table(leavingMember);
});
Next, let's set up the auth permissions for this channel in the BroadcastServiceProvider
:
Broadcast::channel('chat-room-presence.*', function ($user, $roomId) {
if (true) { // Replace with real authorization
return [
'id' => $user->id,
'name' => $user->name
];
}
});
As you can see, a presence channel doesn't just return true
if the user is authenticated; it needs to return an array of data that you want to make available about the user, for use in something like a "users online" sidebar.
Note: You might be wondering how I said earlier you could use the same
Broadcast::channel()
definition for both a private and a presence channel with similar names (private-chat-room.*
andpresence-chat-room.*
), since private channel Closures are expected to return a boolean and presence channel Closures are expected to return an array. However, returning an array is still "truthy," and will be treated as a "yes," authorizing that user for access to that channel.
If everything got connected correctly, you should now be able to open up this app in two different browsers and see the updated members list logging to the console every time another user joins or leaves:
So you can now imagine how you might be able to ring a bell every time a user leaves or arrives, you could update your JavaScript in-memory list of members and bind that to an "online members" list on the page, and much more.
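To sketch that idea in plain JavaScript: the three presence callbacks can keep a local members array in sync, which you could then bind to an "online members" sidebar. Note that createMemberList is a hypothetical helper for illustration, not part of Echo:

```javascript
// Keep an in-memory list of presence-channel members in sync.
// createMemberList is a hypothetical name, not an Echo API.
function createMemberList() {
  var members = [];
  return {
    here: function (initial) { members = initial.slice(); },          // when you join
    joining: function (member) { members = members.concat(member); }, // someone arrives
    leaving: function (member) {                                      // someone leaves
      members = members.filter(function (m) { return m.id !== member.id; });
    },
    all: function () { return members; }
  };
}

// Wiring it up to Echo, assuming the examples above:
// var list = createMemberList();
// Echo.join('chat-room-presence.1')
//     .here(list.here)
//     .joining(list.joining)
//     .leaving(list.leaving);
```

From there, ringing a bell or re-rendering a sidebar is just a matter of reacting whenever one of those callbacks fires.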
There's one last thing that Echo provides: what if you don't want the current user to get notifications? Maybe every time a new message comes into a chat room you're in, you want it to pop up a little message at the top of the screen temporarily. You probably don't want that to happen for the user that sent the message, right?
To exclude the current user from receiving the message, use the broadcast
helper to trigger your event instead of the event()
helper, and follow the call with toOthers()
:
broadcast(new \App\Events\ChatMessageWasReceived($message, $user))->toOthers();
Of course, this won't do anything with our sample Artisan command, but it will work if the Event is being triggered by a user of your app with an active session.
What we've done here looks pretty simple, so let me talk about why this is great.
First, remember that the messages that you're sending to your users are not just text—we're talking about JSON representations of your models. To get a sense for why this is great, take a look at how Taylor creates a task manager that keeps tasks up to date, on the page, in real time in his Laracasts video. This is powerful stuff!
Second, it's important to note that the most important benefits that Echo provides are completely invisible. While you may agree that this is powerful stuff and opens up a ton of opportunities, you might be tempted to say "but Echo is hardly doing anything!"
However, what you're not seeing is how much work you would have to do to set up authentication, channel authorization, presence callbacks, and more if you weren't using Echo. Some of these features exist in Pusher JS and Socket.io, with varying levels of difficulty, but Echo makes them simpler and provides consistent conventions. Some of the features don't exist in the other libraries at all, or at least not as a single, simple feature. Echo takes what could be slow and painful with other socket libraries and makes it simple and easy.
]]>It turns out that there's a long road between "I have a book contract" and "I know everything there is to know in order to write this book."
It doesn't matter how much of an expert you feel like. It doesn't matter how much time you've spent learning and teaching. Across the board, every tech author I've talked to has described just how much they learned—had to learn—when they wrote a book.
I learned a lot in writing Laravel: Up and Running. And I want to share it with you.
I had two big fears when I first started writing Laravel: Up and Running.
First, I was afraid that I wouldn't have anything to offer other than what was already on my blog. That fear was quickly assuaged when I realized just how much there is to cover.
And second, I was afraid I was writing a book that would be helpful for beginners but useless for everyone else. Once again, this fear didn't last long.
There is as much content in this book that I had to learn as I wrote as there is content that I already knew. The amount of source diving I had to do, and test apps I had to write, to get this book right is incredible. For several chapters I spent more time coding and testing and reading source code than I did writing the actual book.
No blog post could contain all of the new things I learned from writing this book. I've been using—and teaching about—Laravel for years, and I was still shocked by how many tools and helpers and features I discovered.
Here are a few that stand out to me that I had never seen prior to writing the book.
Cookies are a little different than other similar tools—cache, session, etc.—in that PHP can't actually write them in the middle of a user request. Rather, they have to be returned along with the response.
That means that the Cookie Facade (and the global cookie()
helper) doesn't have anything like Cookie::put()
to allow you to directly set a cookie. Traditionally, we've created cookies and then attached them to the response using code like this:
Route::get('dashboard', function () {
$cookie = cookie('saw-dashboard', true, 15);
return view('dashboard')->withCookie($cookie);
});
But in writing the book I learned that there's a queue()
method on the Cookie Façade (and only on the Façade, not the helper or the injected class) that you can use prior to the response, and Laravel's AddQueuedCookiesToResponse
middleware will un-queue that cookie and attach it to your response after your route returns. So that makes this code possible:
Route::get('dashboard', function () {
Cookie::queue('saw-dashboard', true, 15);
return view('dashboard');
});
In this example it doesn't matter much, but in a longer controller method you might appreciate the ability to set your cookie earlier—or to even set it somewhere other than your controller, if you want.
I always knew you could attach files to your emails, but figured it was something like "get the underlying Swift object and then perform fifty lines of magic to make it happen."
Nope. It's stupidly simple.
Mail::send('emails.whitepaper', [], function ($m) {
$m->to('barasa@wangusi.ke');
$m->subject('Your whitepaper download');
$m->attach(storage_path('pdfs/whitepaper.pdf'));
});
It's also incredibly simple to embed an image directly into your email template:
// emails/has-image.blade.php
Here's that image we talked about sending:
<img src="{{ $message->embed(storage_path('embed.jpg')) }}">
Thanks!
Magic.
The docs show that you can chain a few types of Scheduler methods together to define the schedule your commands will run at. But it turns out you can chain any reasonable combination of times together.
This makes the Scheduler much more powerful than the docs let on. You can now do something like this:
// Run once an hour, weekdays, from 8-5
$schedule->command('do:thing')->weekdays()->hourly()->when(function () {
return date('H') >= 8 && date('H') <= 17;
});
You can also write more complex schedulers in other classes, and pass them in with a Closure:
$schedule->command('do:thing')->everyThirtyMinutes()->skip(function () {
return app('SkipDetector')->shouldSkip();
});
This seems like everyone should know it, but somehow I had never stumbled across it. In your tests, you can assert that, if you visit a particular route, the view in that route gets passed data. So you can write a route like this:
Route::get('test', function () {
return view('test')->with('foo', 'bar');
});
And then write this test:
public function test_view_gets_data()
{
$this->get('test');
$this->assertViewHas('foo'); // true
$this->assertViewHas('foo', 'bar'); // true
$this->assertViewHas('foo', 'baz'); // false
}
There is a lot more that I learned that I haven't covered here. Much of it will come through in the little details, which made it not as good of a fit for a blog post like this—but trust me, there's a lot.
If you want to check out the book itself, I have a free sample available on the site I've set up for the book: laravelupandrunning.com
Finally, if you already know you want the book, it's available for print and e-book pre-order on O'Reilly, and if you pre-order the e-book right now you'll get a free copy of the first 12 chapters, right out of my text editor, before they've seen the strike of an editor's pen.
Thanks for checking this out. I'm confident that, no matter who you are (except you, Taylor, and maybe you, Jeffrey), you'll learn something from this book. I've busted my butt to make this useful not just to people new to Laravel, but to every member of the Laravel community, and I think y'all are going to love it.
]]>In case you missed it, I finished the first draft of my book Laravel: Up and Running this weekend. I'm ecstatic to be done, and ready to finish the updates and edits to get this book published!
First draft. Is. Done
— Matt Stauffer (@stauffermatt) May 30, 2016
My “250-page” book is currently sitting at 371 pages
Next up!! Edits, & making sure it’s up to date w/last 9 mos.
I took a look at the git history, and my first commit to the repo for the book was on July 10, 2015. That means the book will likely publish somewhere close to a year after I first started writing, and I finished writing the first draft in 325 days.
I've gotten a lot of questions about what the process was like working with O'Reilly, why I chose O'Reilly over self-publishing, and what syntax and system I'm using to write the book. I didn't want to spend too much time blogging about the book while I was writing it, but now that the first draft is done I can finally take a pause and write this up.
If you're not familiar with O'Reilly, they're the premier tech publisher. I currently have three O'Reilly books on my nightstand, dozens on my bookshelf, and my brothers and I grew up reading their "animal books" (a series of tech books with animals on the cover). You've probably seen them, even if you don't recognize the name:
So, when O'Reilly approached me about writing a book about Laravel, my first response was lifetime-geek excitement. O'Reilly? Wants me? It's like rappers who grew up learning how to rap by listening to artists like Jay-Z, only to do so well that they one day are asked to feature on one of his songs. It just feels good.
However, when you work with a traditional publisher, you only make a fraction of what you make when you self-publish. Your publisher sets your price (I spent a year writing 371 pages and it's currently listed at $30 on Amazon—that feels a bit like a kick in the gut), and while I don't want to reveal all of O'Reilly's secrets, let's just say I will be paid less than a quarter of the revenue. That sucks, right?
My friend Adam just released a book called Refactoring to Collections. You can pay anywhere from $39-179 for it, and Adam is paid nearly every cent (Gumroad takes a very small cut). Most of my friends who self-publish do so through Leanpub, which takes a small percentage of your revenue.
So we can see, there's a very big difference on pay-out. Why, then, would I choose to work with a traditional publisher?
There are four primary reasons why I chose to publish with O'Reilly.
First, I love O'Reilly, have learned from them since I was a kid, and love the feeling of telling my friends and family that I'm writing a book for O'Reilly. Even my wife, who's as non-technical as they get, said, "Wait, like those animal books you have on your nightstand??" Yes. Those.
The second reason is that O'Reilly has tooling, editors, promotional machines, and the experience of publishing and promoting that I don't have. If I self-published, my book wouldn't have gone to print. It wouldn't be in Amazon or physical bookstores. Instead, it would go out to my Twitter audience as an ebook and that's it. I would've facilitated finding my own editors (although I did that with O'Reilly anyway because I know some great editors), and either published through Leanpub or had to create my own PDF generator like Adam did. With O'Reilly I have access to decades of publishing wisdom and experience that I wouldn't have on my own.
The third reason is that O'Reilly has an audience that I don't have. They have respect and connections within academia, the enterprise, and a wide swath of the tech community that I don't have. And I don't just mean that I don't have those connections; Laravel, as a relatively young framework, doesn't have as many of those connections as I want it to.
Working with O'Reilly doesn't just legitimize my book, it also helps to legitimize Laravel, which is something I very much want to see happen. I would love for the old-school PHP heads who are considering trying out a framework to see an O'Reilly book about Laravel and consider that a validation—and a learning experience—that can get them there.
And finally, writing a book with O'Reilly may not get me as much money (and trust me, I'm not rich, I would love more money), but it provides other benefits to me as an author.
It helps my credibility as an author and a teacher: I've been published by a major publisher, not just Leanpub. It helps my connections: I am now in the O'Reilly fold, with a connection to other O'Reilly authors and O'Reilly conferences. And it helps my reputation. In ten years, will my consultancy still exist (I hope so)? Will the number of Twitter followers I have matter? Who knows. But in ten years, I predict that being an O'Reilly author will still be a signifier of accomplishment and ability. I don't have a computer science degree. I don't have many pieces of paper that show my abilities as a thinker and teacher and developer. But I will have this book, and you better believe I'll have a big picture of me smiling and holding it the second I get my first printed copy.
So, would I recommend publishing with O'Reilly? Absolutely. If you're asking the question, you are likely in a position where you should consider it, strongly.
If you have a large audience already and want to write a book to make a lot of money and don't mind doing a lot of the work yourself, you may not want to go with a traditional publisher. There are people like that, and they're often better suited to publish with Leanpub or something similar. My friend Adam? He self-published, and it was absolutely the right decision for that book.
But if you're not that exact type of person, I'd highly recommend trying the traditional publisher route if you can. Even someone who has all of those characteristics can benefit from publishing with someone like O'Reilly.
Not everyone can just walk up to O'Reilly and get a book published with them. There were two primary factors that led to my connection with them.
First, they had decided to write a book on a topic I love to teach. If you are deeply passionate about a subject O'Reilly (or your desired publisher) already has a book on, you may be out of luck.
And second, I had a ridiculous amount of written material available on the Internet on the topic they wanted a book about. They were able to see just how I might write a book about Laravel before they even contacted me.
The number one recommendation I would make if you want to write a book, traditional publisher or no: blog. Blog all the things. Blog all the time. Refine your voice. Get used to writing. And get your name out there and connected to the subject you would like to write about.
And here's my number one warning: just because you write a book, it doesn't mean it will sell. If you choose to self-publish, you are responsible for finding a book people are interested in, and marketing it. You could put six months of work into your book only to discover that only ten people want it. That would be awful. So do everything in your power to make sure that you're going to see the level of sales that you think validate your time spent.
I'll write a short post later this week about the process of working with O'Reilly--how they use git and AsciiDoc and what the planning and writing and editing processes are like.
And when we get closer to the publishing of the book, I'll gladly write up any other relevant pieces of information. If I can help you, future author, with my experiences, I would love it.
Love this article? You can pre-order the e-book now and get early access to the first few chapters, un-edited, with more to come.
]]>Mailgun and Sendgrid have been standby transactional email providers for a while, and there's also Amazon's SES, CampaignMonitor, and higher-cost-high-uptime premium Postmark.
But right when Mandrill announced their pricing change, a new transactional email provider came out of nowhere: SparkPost. They claim they've been around for years and power most of the Internet, but that's not entirely true. Rather, they are a new transactional service built on top of an old and powerful infrastructure, called "Momentum" by MessageSystems. But Momentum itself is definitely proven:
So let's walk through the process of signing up and moving Giscus, my app for notifying you of comments on your gists, from Mandrill to SparkPost.
Note: Most of the stuff in this article is easy to do. I'm writing it to give you a sense of what SparkPost is like in case you want to compare it with another provider, not because I think you need instructions for how to sign up. :)
First, let's go sign up.
100k free emails a month for the lifetime of the account? Yes please.
Now, I enter my domain. Sadly I don't have access to either of these email addresses, so let's see what else we can do.
I'll choose REST. We could technically use either, but I prefer using an API if possible. This gets me an API key, so I'll copy it down and then head over to the dashboard.
Note: If you want it done fast, or if you're using Laravel prior to 5.2, just use SMTP. You can copy the credentials, paste them into your
.env
file, update your app config to use SMTP, and then you're done.
Well ain't this pretty! This daily limit was 500 until I verified my email address, but now it's 10,000. What's next? Verify my sending domain. Let's do it.
Just like any other email provider, I'll need to set up DKIM and SPF records to verify ownership of the domain. Your experience may vary based on your DNS provider, but with DNSimple this is easy as pie. Once I set up the SPF and DKIM records, I was marked "ready to send."
So our SparkPost account is up and running. Let's now connect it to Giscus.
Like I mentioned earlier, the fastest option is SMTP. But I want to try the full API integration, so I'm going to upgrade Giscus to 5.2 using Laravel Shift and then that'll get me access to the SparkPost driver.
Make sure you're on Laravel 5.2.29 or later. I wasn't, so I upgraded, and now I need to add a sparkpost
array to my config/services.php
file:
'sparkpost' => [
'secret' => env('SPARKPOST_SECRET'),
],
I'll grab my API key that I stored earlier, head to my .env
file, and put it in there as SPARKPOST_SECRET
:
SPARKPOST_SECRET=1509812piu4nlkjadhfo98qwrw
Finally, I'll update .env
to show it that I'm using the sparkpost
driver:
MAIL_DRIVER=sparkpost
And that's it! Mail's now coming through via my new SparkPost driver.
One of the main reasons I wrote this post was to show folks what SparkPost feels like, so here are a few screenshots of the dashboard:
SparkPost also offers lists and templates, if you want to use their API directly. Check out the PHP-SparkPost package on Packagist.
One thing I've noticed: on some other providers (I can't remember exactly which have it, sadly, but I know Mailgun does) I could inspect the contents of an email after it was sent. I'll often use that when debugging or helping folks with spam issues.
I haven't yet found out how to do it in SparkPost, and I'm worried it may not be possible. I've asked SparkPost on twitter, so we'll see what the response is. If they don't have it, that would be reason enough for me to use Mailgun instead. I hope I'm wrong, though, and it's just hiding in there somewhere!
That's all. It's very simple to set up and verify a SparkPost account; it's a proven platform; they have a generous free plan; and the dashboard is very easy to use, save the possible lack of the ability to inspect individual messages.
Have you had good or bad experiences with SparkPost? Let me know on twitter.
]]>However, Vue isn't just limited to simple components. Vue-resource makes AJAX easy, vue-router sets up single-page-app routing with almost no effort, and one day I'll learn Vuex, I promise.
I want to show you just how easy it is to use vue-router to create a single-page-app using Vue. And trust me: it is easy. If you've already created your first component using Vue, you're 90% of the way there.
As is often the case, I choose Laravel Elixir as my build tool.
You can use any build system that gives you access to your NPM packages, or you can even pull in the package manually (via CDN) or via Bower. In the end, get the vue-router
package installed somehow. If you're not going to use Elixir, just skip the next section.
If you are going to start a new project using Laravel Elixir, read the docs for basic installation instructions. Then add vue
and vue-router
:
npm install vue vue-router --save
Next, you'll want to add a Browserify task for your JavaScript file:
// gulpfile.js
elixir(function (mix) {
mix.browserify('app.js');
});
This now expects that you have a file at resources/assets/js/app.js
and will pass that file through Browserify and Babel and output it at public/js/app.js
. Create the resources version of the file, and then run gulp watch
to get it running.
In your primary JavaScript file, you'll want to pull in Vue and vue-router using whatever import system you have available:
var Vue = require('vue');
var VueRouter = require('vue-router');
Vue.use(VueRouter);
We pull in Vue and vue-router, and then we link them together. Now, let's write our app.
## Creating your application
Like in any Vue app, we need to create a base application. But unlike in other Vue apps, the core templating work we'll be doing is mapping certain routes to certain components. There's no new Vue concept like a "page"--each page is just a component, which may contain other components.
So, let's create our App and our router:
var App = Vue.extend({});
var router = new VueRouter();
router.start(App, '#app');
That's it! This won't actually do anything, since we haven't mapped any routes, but we just defined an App, defined a router, bound them together, and then initialized the router.
Now that we have an app and a router, let's define a few routes:
var App = Vue.extend({});
var router = new VueRouter();
var Home = Vue.extend({
template: 'Welcome to the <b>home page</b>!'
});
var People = Vue.extend({
template: 'Look at all the people who work here!'
});
router.map({
'/': {
component: Home
},
'/people': {
component: People
}
});
router.start(App, '#app');
We've now defined two possible routes in our application, and mapped each with a component.
Let's create an HTML page to hold our router. Despite what you might think, this page doesn't need to be entirely empty. With vue-router, your app can contain some code that isn't switched out by components--for example, a nav section.
...
<div id="app">
<a v-link="{ path: '/' }"><h1>Our company</h1></a>
<ul class="navigation">
<li><a v-link="{ path: '/people' }">People</a></li>
</ul>
<router-view></router-view>
</div>
...
Let's assume this page has HTML header and wrapper tags, and also imports our scripts and dependencies in the header or footer somewhere.
We can see a few important pieces here. First, when we start our router in our script file, we bound it to '#app'
, so we needed to create a page element with the ID of "app" to bind our Vue app to.
Second, you can see the Vue syntax for links: using the v-link
property and passing it a JSON object. For now, we'll stick to { path: '/url-here' }
.
If the link you're currently visiting is "active"--meaning the user has that link open--the router will apply the class
v-link-active
to that link, which you can then style uniquely. You can also change that class or have it applied to a separate related element like a parent div or li--check out the docs to learn more.
Finally, we have a <router-view>
component, which is where the output of each page's component will go.
If you open that page up and everything loads correctly, you now have your first single-page app using vue-router!
You might notice that it's currently using "hashbang"-style navigation, where all of your routes are appended after
#!
. You can disable this, which we'll talk about later, but it will take a bit of server mumbo jumbo, just so you're prepared.
We defined some pretty simple routes above. Let's dig a little further into the sorts of routes you can define with vue-router.
You're probably going to want to define routes at some point that are more than just a static URL. We call route definitions that can match multiple URLs "dynamic" routes. Let's learn how to define one.
router.map({
'/people/:personId': {
component: {
template: 'Person ID is {{$route.params.personId}}'
}
}
});
We can learn a few things from this example. First, we can see that vue-router maps dynamic parameters using this syntax: :paramName
.
Second, we can see that vue-router allows you to define component objects inline, if you'd like.
And third, we can see we have access to a $route
object with a params
property that's an object of all of the matched dynamic URL parameters.
If you want to define a dynamic route segment that can match multiple segments, use a segment definition name that starts with *
instead of :
:
router.map({
'/people/*greedy': {},
'/people/*greedy/baz': {}
});
That first route definition will match /people/a
, /people/a/b
, /people/a/b/c/d/e/f
.
The second route definition will match /people/a/baz
, /people/a/b/c/d/e/f/g/baz
, etc.
And both will return a greedy
segment on the route: $route.params.greedy
, which is equal to the full string for that segment. /people/a/b/baz
would return { greedy: 'a/b' }
.
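Those matching rules can be sketched in plain JavaScript. This is only an illustration of the semantics described above, not vue-router's actual matcher; matchRoute is a hypothetical helper:

```javascript
// Hypothetical sketch of dynamic (:name) and greedy (*name) segment matching.
// Not vue-router's real implementation; for illustration only.
function matchRoute(pattern, path) {
  var patternParts = pattern.split('/');
  var pathParts = path.split('/');
  var params = {};
  var i = 0;
  for (var j = 0; j < patternParts.length; j++) {
    var part = patternParts[j];
    if (part.charAt(0) === ':') {
      // dynamic segment: matches exactly one path segment
      if (i >= pathParts.length) return null;
      params[part.slice(1)] = pathParts[i++];
    } else if (part.charAt(0) === '*') {
      // greedy segment: swallows every segment up to any trailing literals
      var trailing = patternParts.length - j - 1;
      var end = pathParts.length - trailing;
      if (end <= i) return null;
      params[part.slice(1)] = pathParts.slice(i, end).join('/');
      i = end;
    } else if (pathParts[i++] !== part) {
      return null; // literal segment mismatch
    }
  }
  return i === pathParts.length ? { params: params } : null;
}
```

With this sketch, matchRoute('/people/:personId', '/people/5') yields { params: { personId: '5' } }, and matchRoute('/people/*greedy/baz', '/people/a/b/baz') yields { params: { greedy: 'a/b' } }, matching the examples above.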
If you've used Laravel routes before, you'll be familiar with the idea of giving any given route a "name" that you can use to refer to it later. You can do this with vue-router as well:
router.map({
'/people/:personId': {
name: 'people.show',
component: People
}
});
You can then link to that route with v-link
:
<a v-link="{ name: 'people.show', params: { personId: 5 }}">Person 5</a>
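Conceptually, resolving a name plus params back into a URL is simple string substitution. Here's a sketch of the idea; urlFor and the namedRoutes map are hypothetical names for illustration, not vue-router APIs:

```javascript
// Hypothetical sketch: turn a named route plus params into a URL.
// vue-router does the real version of this for v-link and router.go().
var namedRoutes = {
  'people.show': '/people/:personId'
};

function urlFor(name, params) {
  var pattern = namedRoutes[name];
  if (!pattern) throw new Error('Unknown route name: ' + name);
  // Swap each :segment for its matching param value.
  return pattern.replace(/:(\w+)/g, function (match, key) {
    if (!(key in params)) throw new Error('Missing param: ' + key);
    return encodeURIComponent(params[key]);
  });
}
```

So urlFor('people.show', { personId: 5 }) produces '/people/5', which is essentially what the v-link above resolves to.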
We've seen how v-link
can replace normal links, but what if you need to trigger a navigation event in your JavaScript? router.go()
is your friend:
router.go({ name: 'people.show', params : { personId: 5 }});
You can also use router.replace()
, which functions the same except it doesn't generate a new history record in the browser.
When you instantiate the router object at the top of your JavaScript file, you can optionally pass it a few configuration properties.
hashbang: If set to false, the system will use actual URLs (/people) instead of hashbang URLs (/#!/people). If you do this, you'll want to enable history and configure your server correctly.
root: If your app is served from a subdirectory like /app, set this property to /app so that vue-router URLs are generated appropriately; /app/people instead of just /people.
If you're using this app within a Laravel app, instead of configuring nginx or Apache to handle hashbang-less push state, you could configure your Laravel app to handle it; just set up a capture route that grabs all valid URLs and passes them the view that's outputting your Vue code.
Route::get('/{vue_capture?}', function () { return view('home'); })->where('vue_capture', '[\/\w\.-]*');
Every component in your app will have access to this.$route
, which is a "Route Context Object" and exposes properties that can be useful for getting information about your route. Your templates will also have access to this object as $route
.
$route.path
is equal to the absolute path; e.g. /people
$route.params
contains the key/value pairs of your dynamic sections; e.g. { personId: 5 }
$route.query
contains the key/value pairs of your query string; e.g. /people?sortBy=lastName
would return { sortBy: 'lastName' }
$route.router
returns the vue-router instance
$route.matched
returns route configuration objects for every matched segment in the current route
$route.name
returns the name, if it has one, of the current route
You can also pass custom data into your components by passing it into the route definition. For example, I wanted a certain section of my app to be admin-only, so I passed that as a flag into the definition, and was able to check it inside my component as $route.adminOnly
.
router.map({
'/secret-admin-panels': {
component: SecretDashboard,
adminOnly: true
}
});
It was possible for me to check the $route.adminOnly
property in my components, but that's something that would be better handled before users are even granted access to the component, using something like middleware. With vue-router, that's going to be route hooks, like beforeEach
:
// Sample means of checking user access
var MyUser = { admin: false };
router.beforeEach(function (transition) {
if (transition.to.adminOnly && ! MyUser.admin) {
transition.redirect('/login');
} else {
transition.next();
}
});
As you can see, we intercept the request before the component is rendered. We can allow it to move forward using transition.next()
, but we can also redirect the user, or abort the transition entirely with transition.abort()
.
There's also an afterEach()
hook.
A few quick notes before we're done.
First, <router-view>
can be passed props just like any other on-page component.
Second, Vue has an entire Transition Pipeline that you can use to define visual and functional transition behavior between pages.
Finally, this article isn't exhaustive; make sure to read the docs to learn more.
That's it for now. You can see how simple it is to set up your first single-page app using Vue and vue-router, and how smooth the transition from single component to single-page app can be.
If you want to see a (somewhat) working example, my work-in-progress learning app Suggestive uses vue-router.
Laravel Elixir is a build tool for the Laravel PHP framework, but it works just fine outside of Laravel. It's a wrapper around Gulp that makes it simple and painless to perform all the development tasks that are most common across the vast majority of web applications. [..] we get Babel (for ES2015) for free, and Vueify for cheap, so we can write simple, distinct components with almost no pain when it comes to putting them together.
I'm using Vueify on every project that uses Vue these days, and Elixir makes the whole process painless.
Setting up your first Vue.js site using Laravel Elixir and Vueify - Tighten.co Blog
If you've never had the chance to work with one, the login system works like this: enter your email address on the login page, get emailed a login link, click the link, and now you're logged in. Access to your email address proves your identity without the need for a password.
Let's build one together.
This was a ton of fun to create and turned out pretty simple; check it out!
I asked the author, "Could you re-PR this, without the bad commit?" No response.
I knew I could copy the code in a new branch of my own, but I wanted to give the original author attribution! Then I stopped and thought, "Can I do this in git?"
Turns out? You can grab only specific commits with a very simple git command: git cherry-pick
.
git cherry-pick
Git's cherry-pick
command allows you to "cherry pick" only the commits you want from another branch.
Here are the steps to using it:
git checkout master
.git cherry-pick super-long-hash-here
. That will pull just this commit into your current branch.git push origin master
So, I had a pull request introducing the log
component. I went to the pull request in GitHub and pulled the branch down (using the "use the command line" directions, but I could've also pulled down with the GitHub UI.)
On the command line, I then ran git checkout master
. I went to the GitHub UI, found the commit I wanted from the other branch, and grabbed its commit hash by clicking the little "copy" icon next to it in the commit list. Then I went back to the terminal and ran git cherry-pick long-hash-here-pasted-from-github
.
Finally, I pushed it up to GitHub with git push origin master
. Done! Then I closed the pull request manually with a link to the commit.
Here's the entire process:
git fetch origin
git checkout -b add-log-component origin/add-log-component
git checkout master
git cherry-pick COMMIT-HASH-HERE
git push origin master
You can also watch an animation of what the process looked like here:
In my previous post I showed how to disable it on Ubuntu, but since then, Adam Wathan has added a feature to Laravel that allows you to define whether you're using "strict" mode and also allows you to customize exactly which modes you'd like enabled--all in code.
If I can set a configuration option in code instead of on a server without suffering a performance hit, I'll always prefer it--it's one less thing I have to do every time I deploy to a new server. So, I'm totally glad for this new feature.
It's worth noting that you can use this feature not just to disable strict mode on 5.7; you can also enable it on 5.6. It might be wise to enable it on any app running on 5.6 so that you can prepare for 5.7, seeing if anything breaks when you turn on some of the stricter modes.
Before we talk about the feature, let's quickly cover what "strict mode" means.
MySQL has "modes", each of which enable or disable a certain behavior. For example, ERROR_FOR_DIVISION_BY_ZERO
is a mode that, you guessed it, throws an error when you divide by zero in a SQL division operation. Without this mode enabled, you'll just get a NULL
result silently.
"Strict mode," which is really just the list of modes 5.7 enables by default, comprises the following modes:
ONLY_FULL_GROUP_BY
STRICT_TRANS_TABLES
NO_ZERO_IN_DATE
NO_ZERO_DATE
ERROR_FOR_DIVISION_BY_ZERO
NO_AUTO_CREATE_USER
NO_ENGINE_SUBSTITUTION
You can learn more about these modes at the MySQL documentation.
Prior to 5.7, the only mode that was enabled was NO_ENGINE_SUBSTITUTION
.
With this new feature, Laravel now has the ability to do three things: Disable "strict" mode, returning to the <= 5.6 behavior; enable "strict" mode, setting it to the 5.7 behavior; or customize exactly which modes are enabled.
These settings live in config/database.php
in the connections.mysql
section. For starters, let's look into enabling and disabling "strict" mode:
'connections' => [
'mysql' => [
// Behave like MySQL 5.6
'strict' => false,
// Behave like MySQL 5.7
'strict' => true,
]
]
But what if you're not satisfied with either 5.6's or 5.7's mode defaults? Just customize them yourself.
'connections' => [
'mysql' => [
// Ignore this key and rely on the strict key
'modes' => null,
// Explicitly disable all modes, overriding strict setting
'modes' => [],
// Explicitly enable specific modes, overriding strict setting
'modes' => [
'STRICT_TRANS_TABLES',
'ONLY_FULL_GROUP_BY',
],
]
]
You now have the ability to take total control over which modes are enabled on your app's MySQL server, in code, without touching a line of server configuration. Just like that.
By default, I'd recommend leaving them all on. But there may be occasions with particular use cases or old projects where you need to customize this list, and it's now possible--and even simple.
I have, however, started a new, simpler podcast that only takes a few minutes to post: The Three-Minute Geek Show, powered by Briefs.fm. Each episode is less than three minutes and takes me about three minutes to post, so it's been easier to get content up there.
But as a part of this busy-ness, I'm interested in telling people more about the side projects that occupy my time. Maybe, in a time where I have even less time than usual to devote to my side projects, I could reach out to the community around me for help.
So, here's a quick rundown of a few active projects I maintain. If you have time, I'd love for you to consider contributing to them!
Note: Many of these projects don't have good documentation of how to contribute or what even needs doing. I need as much help there as I do with actual feature contributions!
GitHub
Tech: Laravel
Abstract, Bio, and Photo Management Tool for Conference Speakers
3-minute intro to Symposium
GitHub
Tech: Laravel, Twilio
Phone Recording Service for People Pulled Over By Police
3-minute intro to PulledOver
GitHub
Tech: Laravel, VueJS
Listener Suggestion and Voting Service for Podcasts
3-minute intro to Suggestive
GitHub
Tech: Laravel
Easy Blogging With Gists
3-minute intro to Gistlog
GitHub
Tech: Laravel
Comment Notifications For Your Gists
3-minute intro to Giscus
GitHub
Tech: Laravel
Use Laravel Components Outside of Laravel
3-minute intro to Torch
GitHub
Tech: Laravel, Knockout (old, needs a ground-up re-write with a real API and with VueJS)
Track Who You Want To Meet and Who You Met at Conferences
We launched the blog on the v2 beta of Statamic, a flat-filed based CMS that's built on Laravel. My first post on the blog is a writeup of how our first few weeks of working with Statamic v2 beta has been.
Many folks who love programming in Laravel have found themselves needing to build simpler web sites, powered by data edited by backend administrative components that are similar across projects. They see the similarities between this and CMSes and therefore want to build a Laravel-based CMS.
... [Statamic's developers] know how to make content management systems. They understand the constraints and structures and flows and interactions. [...] With Statamic, we have that knowledge paired with Laravel. This is A Good Thing™.
Interested in a Laravel-based CMS? Check out the writeup: Statamic v2 Beta: First Impressions of a new Laravel-based Flat-file CMS
The default authentication guard in Laravel prior to 5.2 (now named the web
guard) is your traditional web-based application authentication layer: username and password post to a controller, which checks the credentials and redirects if they are invalid; if valid, the user information gets saved to the session. Not all of those pieces are absolutely necessary but that's the general mindset.
But what if you want to have an API running in the same app, and it uses JSON web tokens (or some other stateless, non-session authentication mechanism)? In the past you'd have to jump through a lot of hoops to have multiple authentication drivers running at the same time.
In 5.2, not only is it simple to have multiple auth drivers running, it actually already works that way out of the box.
If you check config/auth.php
, you'll see two guards set out of the box: web
, which is the classic Laravel authentication layer, and api
, which is a stateless (no session memory) token-based driver.
Both, as you can see, connect to the same "provider".
Auth providers are also customizable. They're the definition of how the system should store and retrieve information about your users. Each is defined by an instance of
Illuminate\Contracts\Auth\UserProvider
.
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
'api' => [
'driver' => 'token',
'provider' => 'users',
],
],
If you look up higher in config/auth.php
, you can see that the default Auth guard will be "web". That means any time you use Auth functions, middleware, or façades inside your application, they will default to the web
guard unless you explicitly specify otherwise.
So, if web
uses the classic session
driver, what's this new token
driver we're seeing powering the api
guard?
Jacob Bennett has written a fantastic post on that already: API Token Authentication in Laravel 5.2.
Check out his post to learn more about how it works, but here's the short of it:
Add an api_token
column to your users
table (a unique, 60-character string).
Instead of the auth
middleware in your route definition, use the auth:api
middleware.
Use Auth::guard('api')->user()
to get your user instead of Auth::user()
.As you can see, we need to store an api_token
for each user, and every incoming request that's guarded by the token
-driven api
guard will require a query parameter named api_token
with a valid API token set to authenticate that user. And since it's stateless, every request will need to have this API token set; one successful request won't affect the next request.
If you're not familiar with token-based authentication, the consuming application (e.g. an iOS application) will have gotten, and saved, the token for the authenticating user prior to this request, so it will be creating its API calls using that known token as a part of the URL. For example, an iOS app might want to get a list of its user's friends; when the user first authenticated the application with your web site/API the app received a token and stored it. Now, it will generate requests using URLs like this:
http://yourapp.com/api/friends?api_token=STORED_TOKEN_HERE
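As a sketch of the client side (apiUrl is a made-up helper, and the base URL and token are placeholders), building such a URL might look like this:

```javascript
// Hypothetical client-side helper: append the stored token as the
// api_token query parameter expected by the token driver.
function apiUrl(base, path, token) {
  return base + path + '?api_token=' + encodeURIComponent(token);
}

apiUrl('http://yourapp.com', '/api/friends', 'STORED_TOKEN_HERE');
// "http://yourapp.com/api/friends?api_token=STORED_TOKEN_HERE"
```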
As you can see in the token example above, there are two primary places we're going to be using drivers other than the default: in the auth guard middleware, and when we're using convenience features like Auth::check()
and Auth::user()
in our code.
You can choose which guard you're using to protect your routes by adding a colon and the guard name after auth
in the middleware key (e.g. Route::get('whatever', ['middleware' => 'auth:api'])
).
You can choose which guard you're calling manually in your code by making guard('guardname')
the first call of a fluent chain every time you use the Auth façade (e.g. Auth::guard('api')->check()
).
Creating your own guard is simple, because each guard is just a key (web
, api
) that points to a specific configuration of a driver (session
, token
) and a provider (users
). They're configured, as mentioned above, in config/auth.php
:
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
'api' => [
'driver' => 'token',
'provider' => 'users',
],
'matts-fancy-api-guard' => [
'driver' => 'token',
'provider' => 'users',
],
],
But as you can tell, that doesn't really do much unless you're changing the driver or the provider.
Creating your own driver is not quite as simple as creating your own guard. The docs have a spot about Creating your own auth driver, and you're essentially going to be creating your own implementation of Illuminate\Contracts\Auth\Guard
and then registering it as a driver in a service provider somewhere.
The docs also cover how to create your own user provider.
That's it. Enjoy.
Laravel used to have a scaffold for this out of the box. It disappeared recently, to my great chagrin, but it's now back as an Artisan command: make:auth
.
What does it provide? Let's dig in.
We have a layout (resources/views/layouts/app.blade.php
) that is the core of this scaffold, and then a series of views that extend it:
Our public page is still routed via routes.php
:
Route::get('/', function () {
return view('welcome');
});
And we now have a HomeController
, which routes our dashboard:
class HomeController extends Controller
{
/**
* Show the application dashboard.
*
* @return Response
*/
public function index()
{
return view('home');
}
}
This is of course routed in routes.php
in the web
group. And notice that there's something else new there: The Route::auth()
method:
Route::group(['middleware' => 'web'], function () {
Route::auth();
Route::get('/home', 'HomeController@index');
});
The auth()
method is a shortcut to defining the following routes:
// Authentication Routes...
$this->get('login', 'Auth\AuthController@showLoginForm');
$this->post('login', 'Auth\AuthController@login');
$this->get('logout', 'Auth\AuthController@logout');
// Registration Routes...
$this->get('register', 'Auth\AuthController@showRegistrationForm');
$this->post('register', 'Auth\AuthController@register');
// Password Reset Routes...
$this->get('password/reset/{token?}', 'Auth\PasswordController@showResetForm');
$this->post('password/email', 'Auth\PasswordController@sendResetLinkEmail');
$this->post('password/reset', 'Auth\PasswordController@reset');
Now let's take a look at what we get in the browser:
As you can see, we have Bootstrap CSS, a basic Bootstrap app layout, and helpful links to our basic auth actions.
So what does this master layout look like?
We get FontAwesome, the Lato font, Bootstrap CSS, a basic hamburger-on-mobile responsive layout, jQuery, Bootstrap JS, and placeholders that are commented out for the default output CSS and JS files if you choose to use Elixir.
We also have a top nav that links us home, and links guests to either login or register, and links authenticated users to log out.
That's it! It's not anything complex, but it's 30-60 minutes of typing that you just saved on every app that needs it.
In Laravel, you can fix this in code: edit your database.php
config file, and add a key of strict
with a value of false
. But if you're using a non-Laravel application (we've run into this with both CodeIgniter and CraftCMS applications), you won't have that option. Here's how to disable strict mode globally on any Laravel Forge server (and any other Ubuntu server).
Note: I'm not advocating for disabling strict mode. These new modes provide speed and consistency benefits that are worth keeping them enabled. I want to give you the options in case you need them, but my recommendation is to keep them enabled and to learn how to work with them.
MySQL actually looks in five different places for configuration files, so you can make the change I'm about to recommend in several places. It'll look in /etc/my.cnf
, /etc/mysql/my.cnf
, SYSCONFDIR/my.cnf
, $MYSQL_HOME/my.cnf
, and ~/my.cnf
. ~/.my.cnf
is user-specific, and the third and fourth options rely on specifics from your environment. So let's stick with one of the first two.
On a default Laravel Forge box, the default MySQL configuration will live in /etc/mysql/my.cnf
, so let's put our changes there. SSH into your server and use Vim or Pico to edit that file.
If you scroll down the file a bit, you'll find the [mysqld]
section. We're going to add a new key, sql_mode
. On MySQL 5.7, the default values for this key out of the box are:
STRICT_TRANS_TABLES,ONLY_FULL_GROUP_BY,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
The strict mode comes from STRICT_TRANS_TABLES
. So, let's overwrite the sql_mode
and set it to be the same as the default, but without strict mode.
[mysqld]
sql_mode=ONLY_FULL_GROUP_BY,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
That's it! Save the file, and restart MySQL. From the command line that would be sudo /etc/init.d/mysql restart
, or from the Laravel Forge interface, open the server, click the Restart Services icon at the bottom, and choose Restart MySQL.
Note: If you're using CraftCMS or certain versions of Laravel, you'll likely also want to disable the "ONLY_FULL_GROUP_BY" option.
auth
. Maybe the API group gets a different auth
middleware, and it might get an API-specific rate limiter or something else.
Laravel 5.2 has introduced something called middleware groups, which are essentially a shortcut to applying a larger group of middleware, using a single key.
Note: Even if you don't want to use the middleware "shortcuts" aspect of middleware groups, you should read on, because this is a big change to Laravel's global middleware stack.
So remember my admin example above? We can now create an "admin" middleware group. Let's learn how.
You can define middleware groups in app\Http\Kernel.php
. There's a new property named $middlewareGroups
that's an array; each key is a name and each value is the corresponding middleware.
Out of the box, it comes with web
and api
:
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
],
'api' => [
'throttle:60,1',
],
];
As you can see, the keys can reference either a class or a route-specific middleware shortcut like throttle
or auth
. Let's make an admin
group:
protected $middlewareGroups = [
'web' => [...],
'api' => [...],
'admin' => [
'web',
'auth',
]
];
We've defined that the admin
is a group that uses web
(another group) and auth
(a named route middleware). That's it!
You might notice that the middleware in web
are those that used to be applied to every route in Laravel 5.1 and before. That's a pretty big shift in thinking, so please take note: any route that's not given the web
middleware group will not have cookies, sessions, or CSRF protection available.
That also means we have a lot more flexibility, though: it frees us up to have more stateless API layers that aren't giving us the convenience of cookies and sessions. We can get rid of most of the universal middleware—if you take a look, the only universal middleware in 5.2 is the "check for maintenance mode" middleware.
Note as well that any APIs that rely on cookies or sessions (or CSRF) will not work if they're stuck under this api
group, so if you have stateful APIs, you'll need to make some tweaks to this default api
group.
OK, so we know how to define a middleware group. How do we use it?
It'll be clear when you look at the default routes.php
in 5.2:
Route::get('/', function () {
return view('welcome');
});
Route::group(['middleware' => ['web']], function () {
//
});
As you can see, you use it just like any route middleware like auth
: just put the key either as the direct value of middleware
, or in an array that's the value of middleware
. So, here's our admin
middleware group in use:
Route::group(['middleware' => 'admin'], function () {
Route::get('dashboard', function () {
return view('dashboard');
});
});
That's it! Enjoy!
Note: Later in Laravel 5.2, all routes in
routes.php
are now wrapped with theweb
middleware group by default. I'll try to write that up more later, but take a look at theRouteServiceProvider
to see how it's all working.
If you're not familiar with it, rate limiting is a tool—most often used in APIs—that limits the rate at which any individual requester can make requests.
That means, for example, if some bot is hitting a particularly expensive API route a thousand times a minute, your application won't crash, because after the nth try, they will instead get a 429: Too Many Attempts.
response back from the server.
Usually a well-written application that implements rate limiting will also pass back three headers you might not see elsewhere: X-RateLimit-Limit
, X-RateLimit-Remaining
, and Retry-After
(you'll only get Retry-After
if you've hit the limit). X-RateLimit-Limit
tells you the max number of requests you're allowed to make within this application's time period, X-RateLimit-Remaining
tells you how many requests you have left within this current time period, and Retry-After
tells you how many seconds to wait until you try again. (Retry-After
could also be a date instead of a number of seconds).
Note: Each API chooses the time span it's rate limiting for. GitHub is per hour, Twitter is per 15-minute segment. This Laravel middleware is per minute.
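On the consuming side, a client might interpret these headers like so (a sketch; interpretRateLimit is a made-up name, and the header names follow the convention described above):

```javascript
// Interpret rate-limit headers from a response. Returns the limit,
// how many requests remain, whether we're currently limited, and how
// many seconds to wait before retrying (0 when not limited).
function interpretRateLimit(headers) {
  var remaining = parseInt(headers['X-RateLimit-Remaining'], 10);
  return {
    limit: parseInt(headers['X-RateLimit-Limit'], 10),
    remaining: remaining,
    limited: remaining === 0,
    waitSeconds: headers['Retry-After'] ? parseInt(headers['Retry-After'], 10) : 0
  };
}
```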
So, on to the new feature in Laravel 5.2. There's a new throttle
middleware that you can use. Let's take a look at our API group:
Route::group(['prefix' => 'api'], function () {
Route::get('people', function () {
return Person::all();
});
});
Let's apply a throttle to it. The default throttle limits it to 60 attempts per minute, and disables their access for a single minute if they hit the limit.
Route::group(['prefix' => 'api', 'middleware' => 'throttle'], function () {
Route::get('people', function () {
return Person::all();
});
});
If you make a request to this api/people
route, you'll now see the following lines in the response headers:
HTTP/1.1 200 OK
... other headers here ...
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
Remember, this response means:
A) This request succeeded (the status is 200
)
B) You can try this route 60 times per minute
C) You have 59 requests left for this minute
What response would we get if we went over the rate limit?
HTTP/1.1 429 Too Many Requests
... other headers here ...
Retry-After: 60
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
And the actual content of the response would be a string: "Too Many Attempts."
What if we tried again after 30 seconds?
HTTP/1.1 429 Too Many Requests
... other headers here ...
Retry-After: 30
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
Same response, except the Retry-After
timer that's telling us how long to wait has ticked down by 30 seconds.
throttle
middleware
Let's do a bit of customization. We want to limit it to 5 attempts per minute.
Route::group(['prefix' => 'api', 'middleware' => 'throttle:5'], function () {
Route::get('people', function () {
return Person::all();
});
});
And if we want to change it so that, if someone hits the limit, they can't try again for another 10 minutes?
Route::group(['prefix' => 'api', 'middleware' => 'throttle:5,10'], function () {
Route::get('people', function () {
return Person::all();
});
});
That's all there is to it!
You can see the code that's supporting this here: ThrottlesRequests.php
Let's assume that a common pattern for binding a URL route is something like this:
Route::get('shoes/{id}', function ($id) {
$shoe = Shoe::findOrFail($id);
// Do stuff
});
This is something I do a lot. Wouldn't it be nice if you could drop the findOrFail
line and just teach Laravel's router that this route represents a Shoe? You can. In your route service provider, just teach the router: $router->model('shoe', 'App\Shoe');
That means, "any time I have a route parameter named shoe
, it's an ID representing an instance of App\Shoe
". This allows us to re-write the above code like this:
Route::get('shoes/{shoe}', function ($shoe) {
// Do stuff
});
In Laravel 5.2, it's even easier to use route model binding. Just typehint a parameter in the route Closure (or your controller method) and name the parameter the same thing as the route parameter, and it'll automatically treat it as a route model binding:
Route::get('shoes/{shoe}', function (App\Shoe $shoe) {
// Do stuff
});
That means you can now get the benefits of route model binding without having to define anything in the Route Service Provider. Easy!
That's it for implicit route model binding! Everything past this point is already around in 5.1.
These features are not new with 5.2, and therefore not specific to implicit route model binding, but they seem to be not commonly known, so I thought I would throw them in here.
If you want to customize the logic a route model binding uses to look up and return an instance of your model, you can pass a Closure as the second parameter of an explicit bind instead of passing a class name:
$router->bind('shoe', function ($value) {
return App\Shoe::where('slug', $value)->where('status', 'public')->first();
});
You can also customize the exceptions that the route model bindings throw (if they can't find an instance of that model) by passing a Closure as the third parameter:
$router->model('user', 'App\User', function () {
throw new NotFoundHttpException;
});
By default, Laravel assumes an Eloquent model should map to URL segments using its id
column. But what if you expect it to always map to a slug, like in my shoe custom binding logic example above?
Eloquent implements the Illuminate\Contracts\Routing\UrlRoutable
contract, which means every Eloquent object has a getRouteKeyName()
method on it that defines which column should be used to look it up from a URL. By default this is set to id
, but you can override that on any Eloquent model:
class Shoe extends Model
{
public function getRouteKeyName()
{
return 'slug';
}
}
Now, I can use explicit or implicit route model binding, and it will look up shoes where the slug
column is equal to my URL segment. Beautiful.
Form array validation simplifies the process of validating the somewhat abnormal shape of data HTML forms pass in when the array syntax is used. If you're not familiar with it, a common use case is when you allow a user to add multiple instances of the same type on one form.
Let's imagine you have a form where a user is adding a company, and as a part of it they can add as many employees to the company as they want. Each employee has a name and a title.
Here's our HTML; imagine that we have some JavaScript that creates a new "employee" div every time you press the "Add another employee" button, so the user can add as many employees as they want.
<form>
<label>Company Name</label>
<input type="text" name="name">
<h3>Employees</h3>
<div class="add-employee">
<label>Employee Name</label>
<input type="text" name="employee[1][name]">
<label>Employee Title</label>
<input type="text" name="employee[1][title]">
</div>
<div class="add-employee">
<label>Employee Name</label>
<input type="text" name="employee[2][name]">
<label>Employee Title</label>
<input type="text" name="employee[2][title]">
</div>
<a href="#" class="js-create-new-add-employee-box">Add another employee</a>
<input type="submit">
</form>
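The "Add another employee" JavaScript isn't shown in the original; as a sketch, a helper like this (employeeBlockHtml is a hypothetical name) could generate each new block, following the employee[n][field] naming convention:

```javascript
// Hypothetical helper: build the markup for the nth employee block,
// matching the name="employee[n][field]" convention used in the form.
function employeeBlockHtml(n) {
  return '<div class="add-employee">' +
    '<label>Employee Name</label>' +
    '<input type="text" name="employee[' + n + '][name]">' +
    '<label>Employee Title</label>' +
    '<input type="text" name="employee[' + n + '][title]">' +
    '</div>';
}
```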
If you fill out that form and submit it, this is the shape of the $_POST:
array(2) {
  ["name"]=> string(10) "Acme, Inc."
  ["employee"]=> array(2) {
    [1]=> array(2) {
      ["name"]=> string(10) "Joe Schmoe"
      ["title"]=> string(11) "Head Person"
    }
    [2]=> array(2) {
      ["name"]=> string(18) "Conchita Albatross"
      ["title"]=> string(21) "Executive Head Person"
    }
  }
}
As you can see, we get an employee "object" containing an array keyed by the IDs we passed in, each holding key/value pairs of "fieldname" => "user-provided field value".
Note: It used to be common to set every instance of the "employee name" field to just employee[][name] without setting the ID manually. Don't do this. It makes every aspect of working with the code more complex.
But how do we validate this? Prior to 5.2, it was a bunch of manual work. Now, Laravel understands this nesting structure and can validate against it uniquely.
Writing form array validation rules
So, how do we do it? Let's take a look at a normal validator:
// CompaniesController.php
public function store(Request $request)
{
$this->validate($request, [
'name' => 'required|string'
]);
// Save, etc.
}
And now let's add validation for our company employee fields:
// CompaniesController.php
public function store(Request $request)
{
$this->validate($request, [
'name' => 'required|string',
'employee.*.name' => 'required|string',
'employee.*.title' => 'string',
]);
// Save, etc.
}
Now we're validating every employee[*][name]
and employee[*][title]
uniquely, with pretty much no effort on our part. Beautiful.
You may have noticed that the shape of the validation is employee.*.name
, with an asterisk in the middle, which almost indicates that you could put something else there.
What if, instead of an asterisk to indicate "all", you put a specific number there? Turns out it'll only validate the entities with that ID. So if you put employee.1.name
in the validation array instead of employee.*.name
, only the employee with the ID of 1
will be validated according to those rules.
I don't know why or when you would do it, but you could actually set completely separate validation rules for each ID:
$this->validate($request, [
'employee.1.name' => 'required|string',
'employee.2.name' => 'integer', // Not sure *why* you would do this, but, it's possible
]);
That's it. Enjoy!
One thing I noticed was that when I sorted by artist (or anything else), any items that start with a lowercase letter were sorted after all the items that started with an uppercase letter. I took a look to see what was happening, and it seems that VueJS' orderBy function is case-sensitive by default, which means uppercase letters get sorted first, and then lowercase. I Googled around and found a closed GitHub issue that indicated that this was the intended behavior for orderBy
, so I set out to write a case-insensitive orderBy filter.
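This is plain JavaScript behavior, not anything Vue-specific: the default string comparison is by character code, and every uppercase letter sorts before every lowercase one. A quick demonstration:

```javascript
// JavaScript compares strings by code unit, and all uppercase letters
// (A–Z, codes 65–90) sort before all lowercase letters (a–z, 97–122).
const artists = ['beck', 'Adele', 'cake', 'Bowie'];

const caseSensitive = artists.slice().sort();
// → ['Adele', 'Bowie', 'beck', 'cake']

// Lowercasing both sides before comparing gives the order humans expect
const caseInsensitive = artists.slice().sort(function (a, b) {
  const x = a.toLowerCase();
  const y = b.toLowerCase();
  return x === y ? 0 : x > y ? 1 : -1;
});
// → ['Adele', 'beck', 'Bowie', 'cake']
```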
First, I knew that I wanted to mimic Vue's native `orderBy` filter as closely as possible. So, I needed to hunt it down. There are various ways to do that (grep, your IDE, whatever else), but eventually we land on vuejs/vue/src/filters/array-filters.js:
```javascript
/**
 * Filter filter for arrays
 *
 * @param {String} sortKey
 * @param {String} reverse
 */
export function orderBy (arr, sortKey, reverse) {
  arr = convertArray(arr)
  if (!sortKey) {
    return arr
  }
  var order = (reverse && reverse < 0) ? -1 : 1
  // sort on a copy to avoid mutating original array
  return arr.slice().sort(function (a, b) {
    if (sortKey !== '$key') {
      if (isObject(a) && '$value' in a) a = a.$value
      if (isObject(b) && '$value' in b) b = b.$value
    }
    a = isObject(a) ? getPath(a, sortKey) : a
    b = isObject(b) ? getPath(b, sortKey) : b
    return a === b ? 0 : a > b ? order : -order
  })
}
```
What's this doing? At its core, it's duplicating the array and sorting it by pulling out the values of the provided key and comparing them.

You may have noticed that we have a few non-native functions in use: `convertArray`, `isObject`, and `getPath`.
So, we know the structure of our new filter. Where do we put it?
Vue makes it simple to add a custom filter. Here's the structure, which you can place in whatever file you're using to do your core Vue bindings:
```javascript
Vue.filter('reverse', function (value) {
  return value.split('').reverse().join('');
});
```
Let's try an example where we do something to an array. Notice we want to duplicate the array with `.slice()` so we're not manipulating the original array.
```javascript
Vue.filter('uppercaseArray', function (array) {
  return array.map(function (item) {
    return item.toUpperCase();
  });
});
```
We mapped over each item in the duplicated array, assumed it was a string, uppercased it, and then returned the mapped array. That means we can now, anywhere in our app, use this filter:
```html
<tr v-for="item in items | uppercaseArray"> ... etc.
```
> Note: In this example, we didn't use `slice()` before operating on the array. Why? Because, as wonderful folks pointed out to me on Twitter, `map()` in JavaScript already creates a duplicate, so you don't need `slice()` like you do with `sort()`.
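You can verify that distinction directly: `map()` leaves the original alone, while `sort()` mutates in place unless you copy first.

```javascript
// map() returns a brand-new array, so the original is untouched…
const original = ['b', 'a', 'c'];
const upper = original.map(item => item.toUpperCase());
// upper is ['B', 'A', 'C']; original is still ['b', 'a', 'c']

// …but sort() mutates in place, which is why the defensive slice() matters
const sorted = original.slice().sort();
// sorted is ['a', 'b', 'c']; original is still ['b', 'a', 'c']
```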
Great. Now, let's write our own. Let's take orderBy and add a few lines to make it case insensitive.
```javascript
Vue.filter('caseInsensitiveOrderBy', function (arr, sortKey, reverse) {
  arr = convertArray(arr)
  if (!sortKey) {
    return arr
  }
  var order = (reverse && reverse < 0) ? -1 : 1
  // sort on a copy to avoid mutating original array
  return arr.slice().sort(function (a, b) {
    if (sortKey !== '$key') {
      if (isObject(a) && '$value' in a) a = a.$value
      if (isObject(b) && '$value' in b) b = b.$value
    }
    a = isObject(a) ? getPath(a, sortKey) : a
    b = isObject(b) ? getPath(b, sortKey) : b
    // Our new lines
    a = a.toLowerCase()
    b = b.toLowerCase()
    return a === b ? 0 : a > b ? order : -order
  })
});
```
As you can see, I just added the lower-casing at the end, nothing else. What happens?
ERROR. Uncaught ReferenceError: convertArray is not defined. Well, crap.
It turns out all those custom functions aren't just sitting around for you to use; you have to find where they're available. I was able to find all of them except `convertArray` (I found it, but I think it may be an entirely private method), so let's update the filter with the places Vue exposes each one (`getPath` I found with Evan's help on GitHub, and `isObject` I found by trial and error).
```javascript
Vue.filter('caseInsensitiveOrderBy', function (arr, sortKey, reverse) {
  // arr = convertArray(arr)
  if (!sortKey) {
    return arr
  }
  var order = (reverse && reverse < 0) ? -1 : 1
  // sort on a copy to avoid mutating original array
  return arr.slice().sort(function (a, b) {
    if (sortKey !== '$key') {
      if (Vue.util.isObject(a) && '$value' in a) a = a.$value
      if (Vue.util.isObject(b) && '$value' in b) b = b.$value
    }
    a = Vue.util.isObject(a) ? Vue.parsers.path.getPath(a, sortKey) : a
    b = Vue.util.isObject(b) ? Vue.parsers.path.getPath(b, sortKey) : b
    a = a.toLowerCase()
    b = b.toLowerCase()
    return a === b ? 0 : a > b ? order : -order
  })
});
```
As you can see, some of those core Vue methods are exposed via objects like `Vue.util` and `Vue.parsers`.
Now, let's just take a look at `convertArray` to see if we care. It turns out it's an alias for `_postProcess`:
```javascript
_postProcess: function _postProcess(value) {
  if (isArray(value)) {
    return value;
  }
  // ... etc
}
```
Well, check it out! This isn't perfect, but if we're passing in an array, we just get the array back. So, while I'll keep looking at how to bring in `convertArray` properly, we can safely drop it for any use of this new filter that is, indeed, getting an array passed in.
And that's it. We now have a functional `caseInsensitiveOrderBy` filter.

```html
<tr v-for="item in items | caseInsensitiveOrderBy title"> ... etc.
```
Good. To. Go.
But I also think there's a place for people to live code as they go, both as experts and also as learners. That's why I recorded and released Rapid Application Development - From Idea to Prototype in 1:45 with Laravel in February.
Now, I'm taking it a step further. I have no experience with Vue.js but I want to learn it. So I'm taking every chance I get to live-code on Twitch (I'm mattstauffer) as I'm learning Vue.
That means every time I'm writing code, I'm trying to accomplish a task that I don't know how to accomplish. It means doing things wrong, fumbling, and lots of confusion and typos and Googling. I'm hoping that me "learning out loud" will help others learn along with me.
If you can't join live on Twitch, I'm exporting all the videos to a YouTube playlist, which I've embedded below. Subscribe to my channel or just bookmark this page to keep up.
(Note that, to see all the videos in the playlist below, you need to click the little hamburger icon at the upper left hand corner of the embed).
```javascript
Vue.doSomethingOrOther({
  onething: function () {
  },
  otherThing: function () {
  },
  etcetera: 'etcetera'
});
```
On the podcast I mentioned my undying love for the Revealing Module Pattern and promised an example, so here goes.
I first learned about the Revealing Module Pattern through Addy Osmani's book Learning JavaScript Design Patterns.
Let's take a quick example to show why Revealing Module is great. Let's presume we want an Analytics
object. We want to be able to use it throughout our JavaScript to make calls to Google Analytics, but we want a simpler syntax.
```javascript
var Analytics = {};
```
Let's give it two methods, `pageView` and `action`.
```javascript
var Analytics = {
  pageView: function () {
    GoogleAnalytics.prepSomethingOrOther();
    GoogleAnalytics.pushOrSomething('pageView');
  },
  action: function (key) {
    GoogleAnalytics.prepSomethingOrOther();
    GoogleAnalytics.pushOrSomething('action', key);
  }
};
```
Well, take a look at that—it's a bit of repeated code! If only we could have private methods, we could extract some sort of `pushOrSomething` method:
```javascript
var Analytics = {
  pushOrSomething: function () {
    GoogleAnalytics.prepSomethingOrOther();
    // Use function.apply to pass the called parameters along
    GoogleAnalytics.pushOrSomething.apply(this, arguments);
  },
  pageView: function () {
    this.pushOrSomething('pageView');
  },
  action: function (key) {
    this.pushOrSomething('action', key);
  }
};
```
This looks good, but our big problem here is that we've now exposed `Analytics.pushOrSomething()` to the public. Additionally, we haven't hit it quite yet, but when we start building this object out, we'll run into Taylor's pain point of the constantly-growing comma-separated list.
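By the way, the `apply(this, arguments)` trick in the code above is worth seeing in isolation: it forwards every argument the caller passed, unchanged. Here's a standalone demo (the `recorder` object is hypothetical, standing in for `GoogleAnalytics`):

```javascript
// Hypothetical recorder, standing in for GoogleAnalytics in the examples above
const recorder = {
  calls: [],
  push: function () {
    // `arguments` holds whatever the caller passed in
    this.calls.push(Array.prototype.slice.call(arguments));
  }
};

function track() {
  // Forward every argument along unchanged, with `recorder` as `this`
  recorder.push.apply(recorder, arguments);
}

track('pageView');
track('action', 'signup');
// recorder.calls → [['pageView'], ['action', 'signup']]
```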
The Revealing Module Pattern relies on a concept called the Self-Executing Anonymous Function. Let's take a look, first, at an anonymous function:
```javascript
var AnalyticsGenerator = function() {
  return {};
};

var Analytics = AnalyticsGenerator();
```
Great, so we have an un-named function that returns something IF we run it. But that's an awkward syntax, especially if we only expect to run this once. If only we could call the function as soon as we define it...
```javascript
var Analytics = (function() {
  return {};
})();
```
By George, we've done it! We just wrapped the function in parentheses, and then added a second set of parentheses afterwards to indicate it should be executed. Now `Analytics` is defined as the result of this function's execution, which in this case is just an empty object.
Finally. The pattern. Check out our previous example, but now Revealing-Moduled:
```javascript
var Analytics = (function () {
  var _pushOrSomething = function () {
    GoogleAnalytics.prepSomethingOrOther();
    // Use function.apply to pass the called parameters along
    GoogleAnalytics.pushOrSomething.apply(this, arguments);
  };

  var pageView = function () {
    _pushOrSomething('pageView');
  };

  var action = function (key) {
    _pushOrSomething('action', key);
  };

  return {
    pageView: pageView,
    action: action
  };
})();
```
Notice that we've updated the calls within `pageView` and `action` to reference the function name without `this`, and we've prefixed the "private" function `pushOrSomething` with an underscore, just as a reminder to keep it private.
At the end, we've defined what we want to return, and anything that's not in that return object is "private" and can't be called by the public. This also works for properties, and you can even do all sorts of procedural work, if you want:
```javascript
var Analytics = (function () {
  var variableOrWhatever = 42;
  variableOrWhatever *= 1.0;

  var _pushOrSomething = function () {
    GoogleAnalytics.initialize(variableOrWhatever);
    GoogleAnalytics.somethingOrOther();
    // Use function.apply to pass the called parameters along
    GoogleAnalytics.pushOrSomething.apply(this, arguments);
  };

  var pageView = function () {
    _pushOrSomething('pageView');
  };

  var action = function (key) {
    _pushOrSomething('action', key);
  };

  return {
    variableOrWhatever: variableOrWhatever,
    pageView: pageView,
    action: action
  };
})();
```
If you want to see this running, and see what happens when you try to call a "private" method, check out this JSBin. Or just try the code above and see what happens when you run `Analytics._pushOrSomething` (hint: your browser won't like it).
The sky's the limit, kids. Now anywhere you need to generate a JavaScript object you have much more freedom to quickly create private methods and run procedural nasty prep code in the midst of creating it.
A lot of the reason we need this sort of stuff is because JavaScript is based on Prototypes, not Classes & Objects. That's slowly been changing over time, and ES6 has made a huge difference in this aspect. So if you're writing ES6, you'll find less use for this pattern—but I'd still suggest keeping it in your toolbelt.
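To illustrate the ES2015+ direction, here's a sketch of the same `Analytics` shape using module-level privacy: anything not exported stays private, so no IIFE is needed. (The `GoogleAnalytics` stub here is a hypothetical stand-in, as in the examples above; in a real ES module you'd `export` only `Analytics`.)

```javascript
// Hypothetical stand-in for the real GoogleAnalytics object
const GoogleAnalytics = {
  log: [],
  prepSomethingOrOther() { this.log.push('prep'); },
  pushOrSomething(...args) { this.log.push(args); },
};

// Module-private helper: rest/spread replaces the old apply(this, arguments)
function pushOrSomething(...args) {
  GoogleAnalytics.prepSomethingOrOther();
  GoogleAnalytics.pushOrSomething(...args);
}

// In an ES module, this is the only name you would export
const Analytics = {
  pageView: () => pushOrSomething('pageView'),
  action: (key) => pushOrSomething('action', key),
};

Analytics.action('signup');
// GoogleAnalytics.log → ['prep', ['action', 'signup']]
```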
`phpunit.xml` file and set them as entries in the `<php>` block:
```xml
<php>
    <env name="APP_ENV" value="testing"/>
    <env name="CACHE_DRIVER" value="array"/>
    <env name="SESSION_DRIVER" value="array"/>
    <env name="QUEUE_DRIVER" value="sync"/>
    <env name="DB_DATABASE" value=":memory:"/>
    <env name="DB_CONNECTION" value="sqlite"/>
    <env name="TWILIO_FROM_NUMBER" value="+15005550006"/>
</php>
```
But what if you find yourself needing to exclude these values from version control?
I'm getting back to work on PulledOver, which uses Twilio as its foundation. I wanted to write some tests for my TwilioClient class, which directly connects to Twilio's API, so I was using Twilio's Test Credentials to send fake calls to the API and examine their responses.
But that means I have a Twilio SID and Token that I'm not supposed to expose to the public, right? I even asked around on Twitter:
#geek
When you’re working with @twilio Test credentials, does it matter if you commit them to version control? Would be easier if I could…
— Matt Stauffer (@stauffermatt) November 3, 2015
I got quite a few responses, and one from the CEO of Twilio (!):
@stauffermatt @twilio Best not to check them in. We may rate-limit API requests w/ test credentials, and if the whole world is using yours..
— Jeff Lawson (@jeffiel) November 3, 2015
OK, so I need to exclude them. But how? I scratched my head a few times, dropped a question in the company Slack, and went to sleep. I woke up this morning and the ever-resourceful Keith Damiani had an answer for me: use Dotenv (which Laravel uses to load `.env`) to load a `.env.test` file in Laravel's TestCase base class. DUH. Here's how you do it:
## .env.test in Laravel

First, create a `.env.test.example` file and fill it with placeholders for whichever keys you want:
```
TWILIO_ACCOUNT_SID=fillmein
TWILIO_ACCOUNT_TOKEN=fillmein
```
Next, copy `.env.test.example` to `.env.test` and fill in the actual values.

Add `.env.test` to your `.gitignore` file.
Finally, add these lines to `tests/TestCase.php`'s `createApplication` method, just below `$app = require __DIR__.'/../bootstrap/app.php';`:
Note: Dotenv has changed their syntax multiple times recently, so I'll show three different versions here:
```php
// Old Dotenv
if (file_exists(dirname(__DIR__) . '/.env.test')) {
    Dotenv::load(dirname(__DIR__), '.env.test');
}

// Medium-old Dotenv
if (file_exists(dirname(__DIR__) . '/.env.test')) {
    (new \Dotenv\Dotenv(dirname(__DIR__), '.env.test'))->load();
}

// New Dotenv
if (file_exists(dirname(__DIR__) . '/.env.test')) {
    (\Dotenv\Dotenv::createImmutable(dirname(__DIR__), '.env.test'))->load();
}
```
That's it! Your `.env.test` environment variables are now pulled into any test you run that extends `TestCase`, but those values are safely kept out of version control.
```php
private function skipIfTravis()
{
    // getenv() returns a string, so compare against 'true', not the boolean true
    if (getenv('TRAVIS') === 'true') {
        $this->markTestSkipped('This test should not run if on Travis.');
    }
}
```
Then in a test, I would use it like this:
```php
public function test_it_can_do_something_that_wont_work_on_travis()
{
    $this->skipIfTravis();

    // Do stuff...
}
```
This worked, but it didn't feel right. I remembered that there was a `@requires` annotation in PHPUnit that natively allows you to skip a test under a certain version of PHP or with certain extensions disabled, so I set out to write my own custom `@requires` block.
> Note: Of course, I could've just made up a custom annotation named `@skipIfTravis` or something. The syntax may have been cleaner. But I was primarily interested in learning: how do PHPUnit annotations work? What does it look like to extend a pre-existing annotation? How do you not just check for the annotation, but also check its value? I'll show you what I found, and then you can run willy-nilly with your own naming schemes.
The only article I could find that referenced this concept was Creating Your Own PHPUnit @requires Annotations, which got me 90% of the way there, but with a syntax I didn't particularly love.
As you can see in the example below, which I copied from their site, we're extending PHPUnit's `checkRequirements()` method, inspecting the current annotation's `requires` block, and then testing our conditions:
```php
protected function checkRequirements() {
    parent::checkRequirements();

    $annotations = $this->getAnnotations();

    foreach ( array( 'class', 'method' ) as $depth ) {
        if ( empty( $annotations[ $depth ]['requires'] ) ) {
            continue;
        }

        $requires = array_flip( $annotations[ $depth ]['requires'] );

        if ( isset( $requires['WordPress multisite'] ) && ! is_multisite() ) {
            $this->markTestSkipped( 'Multisite must be enabled.' );
        } else if ( isset( $requires['WordPress !multisite'] ) && is_multisite() ) {
            $this->markTestSkipped( 'Multisite must not be enabled.' );
        }
    }
}
```
So, I adapted this for my needs. I figured that if I wanted to require that the code was not running on Travis, I'd have to add a possible value of `!Travis` to the `@requires` annotation. It already feels a bit smelly that we're requiring a negative, but let's just roll with it for now.
As you can see in the code below, we're running the parent method, grabbing all the annotations (which are grouped by class annotations and method annotations), checking for an annotation named `@requires`, and looking for a value of `!Travis`. If found, we check whether the `TRAVIS` environment variable is set, and if so, we skip the test.
```php
protected function checkRequirements()
{
    parent::checkRequirements();

    $annotations = $this->getAnnotations();

    foreach (['class', 'method'] as $depth) {
        if (empty($annotations[$depth]['requires'])) {
            continue;
        }

        $requires = array_flip($annotations[$depth]['requires']);

        // getenv() returns a string, so compare against 'true'
        if (isset($requires['!Travis']) && getenv('TRAVIS') === 'true') {
            $this->markTestSkipped('This test does not run on Travis.');
        }
    }
}
```
It still didn't feel quite right. I'm in love with Laravel's Collection class, and there's a simple helper that allows you to convert an array to a Collection: `collect()`. So I converted this array into a Collection and then used `each()` to replace the `foreach (['class', 'method'])`. I also dropped the `array_flip` and simplified the checking and accessing of our requires blocks:
```php
collect($this->getAnnotations())->each(function ($location) {
    if (! isset($location['requires'])) {
        return;
    }

    // getenv() returns a string, so compare against 'true'
    if (in_array('!Travis', $location['requires']) && getenv('TRAVIS') === 'true') {
        $this->markTestSkipped('This test does not run on Travis.');
    }
});
```
So, I grabbed my base `TestCase` class and placed this code into it, and now I can annotate any test to be skipped on Travis.
```php
// TestCase.php

// Extending parent checkRequirements method
protected function checkRequirements()
{
    parent::checkRequirements();

    // Convert the list of annotations, which can be grouped by class and/or
    // method, to an Illuminate collection, and then act on each
    collect($this->getAnnotations())->each(function ($location) {
        // Exit early if this annotation isn't @requires
        if (! isset($location['requires'])) {
            return;
        }

        // Look for our value !Travis under the @requires annotation
        // (getenv() returns a string, so compare against 'true')
        if (in_array('!Travis', $location['requires']) && getenv('TRAVIS') === 'true') {
            $this->markTestSkipped('This test does not run on Travis.');
        }
    });
}
```
```php
// SomeTest

/**
 * @requires !Travis
 */
public function test_it_does_something_i_would_like_to_skip_on_travis()
{
    // Test stuff
}
```
Looking at it afterwards, I think it'd probably be better in this instance to use a custom-named annotation like `@skipIfTravis` instead of overloading the `@requires` annotation.
Let's imagine, just for a second, we had a `@skipIfTravis` AND a `@skipIfLocal`—not because that's a great idea, but just because it's an interesting opportunity to look at a broader architecture.
```php
// TestCase.php
protected function checkRequirements()
{
    parent::checkRequirements();

    collect($this->getAnnotations())->each(function ($location) {
        $this->handleTravisSkips($location);
        $this->handleLocalSkips($location);
    });
}

private function handleTravisSkips($location)
{
    if (! array_key_exists('skipIfTravis', $location)) {
        return;
    }

    // getenv() returns a string, so compare against 'true'
    if (getenv('TRAVIS') === 'true') {
        $this->markTestSkipped('This test does not run on Travis.');
    }
}

private function handleLocalSkips($location)
{
    if (! array_key_exists('skipIfLocal', $location)) {
        return;
    }

    if (getenv('LOCAL') === 'true') {
        $this->markTestSkipped('This test does not run on local environments.');
    }
}
```
You can probably sense a bit of a smell here, where we're duplicating the structure with the handlers.
If you're like me, you're dreaming of allowing this requirements-checker to have "annotation handlers" registered. Something like `$this->registerHandler($locationKey, $classToHandleThisLocation)`.
And I'm sure there are all sorts of more complicated and interesting frameworks or tools already out there (if so, let me know on Twitter!) I just had a little fun with this and wanted to document it as I went along. I hope it helped someone!
I forgot to mention this, but: if you did indeed only have a single annotation, you could clean up the collection operations a bit. We're really just filtering out the options that don't meet three criteria (has a `@requires` annotation, `!Travis` in the requires annotation array, and the Travis environment variable set), and then marking the test as skipped for those that remain. I did a bit of optimization, and then worked with Illuminate filter-guru Adam Wathan to clean it up even a bit more. Note that we're relying on Illuminate's `array_get` to combine the `requires` checking and `!Travis` checking, and then using `and` to only skip the test if `getenv` returns a truthy value.
```php
collect($this->getAnnotations())->filter(function ($location) {
    return in_array('!Travis', array_get($location, 'requires', []));
})->each(function ($location) {
    getenv('TRAVIS') and $this->markTestSkipped('This test hates Travis.');
});
```
But sometimes the repetitive code isn't the view itself, but the conditional logic I'm running.
I'm writing a quick Laravel app for my physical trainer, and it needs to be multitenant, which means it needs to handle and route visits from `myapp.com` but also `username.myapp.com`. The way I'm handling this is that I want to have an `App\Context` object available globally, which contains knowledge about the application's state.

I have a service provider that binds a new `App\Context` object to the IOC container; it has knowledge of which URL was used to access the app, which helps it know if the user is visiting from a "public" (`myapp.com`) or "client" (`username.myapp.com`) context.
So, I find myself writing conditionals all over the place to test whether I'm in a public context or a client context.
I originally was writing it like this, using Blade service injection:
```blade
// random-file.blade.php
@inject('context', 'App\Context')

@if ($context->isPublic())
    // One thing
@else
    // Another thing
@endif
```
It worked fine, but it did feel kind of nasty. I considered binding an `$isPublic` or a `$context` variable to every view using a global view composer, but it just didn't feel right. I happened to mention the situation to Taylor, and he reminded me of the ability to create custom Blade directives.
Blade directives allow you to create a custom Blade tag. You might think of just outputting some HTML:
```blade
@myGreatTag

// produces:
<a href="#">Great things here</a>
```
Which you'd bind like this:
```php
// AppServiceProvider
public function boot()
{
    Blade::directive('myGreatTag', function () {
        return '<a href="#">Great things here</a>';
    });
}
```
... but there's not really much reason to do that. You could just accomplish that with a view partial: `@include('partials.my-great-partial')`
Where directives shine is when you realize that the output of your Blade tags doesn't get treated as just HTML—it's treated as PHP. For example:

```blade
@if (true === true)
```

actually converts to:

```php
<?php if(true === true): ?>
```

And the binding for the `@if` directive actually looks like this:
```php
protected function compileIf($expression)
{
    return "<?php if{$expression}: ?>";
}
```
So, once we realize that, we see that a Blade directive allows us to write shortcuts for PHP.
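To see the core mechanism, here's a toy sketch (nothing like Blade's real compiler, and written in JavaScript purely for illustration): directives are compile-time string substitutions that emit PHP source into the compiled template.

```javascript
// Toy sketch of directive compilation: each registered directive maps a
// template tag to a string of PHP source, substituted at compile time.
const directives = {
  public: () => "<?php if (app('context')->isPublic()): ?>",
  endif: () => "<?php endif; ?>",
};

function compile(template) {
  // Replace each @name tag with its directive's output; leave unknown tags alone
  return template.replace(/@(\w+)/g, (match, name) =>
    directives[name] ? directives[name]() : match
  );
}

compile('@public Hello @endif');
// → "<?php if (app('context')->isPublic()): ?> Hello <?php endif; ?>"
```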
That means I can re-write my conditional like this:
```blade
@public
    // public thing
@else
    // non-public thing
@endif
```
And this is all it takes:
```php
// AppServiceProvider
public function boot()
{
    Blade::directive('public', function () {
        return "<?php if (app('context')->isPublic()): ?>";
    });
}
```
It might look strange, but remember, we're just returning a string that will be executed as PHP.
I could've chosen to figure out whether or not our current context `isPublic` in the binding, and just outputted `if (true)` or `if (false)`, but that just seemed weird:
```php
Blade::directive('public', function () {
    $isPublic = app('context')->isPublic() ? 'true' : 'false';

    return "<?php if ({$isPublic}): ?>";
});
```
And, it turns out, that actually wouldn't work! Since compiled views are cached, it wouldn't re-run this check on every page view.
But honestly, the sky is the limit here.
Of course, I could've named it `@ifPublic`. Or I could actually create a whole set of conditionals. It could be `@ifPublic`, `@otherwiseBecauseYouKnowWhatTheHeck`, and `@endIfPublicAndStuff`. Whatever I want.
Further, I can actually pass parameters into Blade directives, opening up all sorts of options for customization (and abuse):
```php
// Bind:
Blade::directive('newlinesToBr', function ($expression) {
    return "<?php echo nl2br{$expression}; ?>";
});
```

```blade
// Use:
<p>@newlinesToBr($body)</p>
```
Now we're actually creating custom inline functions for formatting, or whatever insane ideas you decide to throw at it. Go wild. Well, don't really go wild; you could end up confusing yourself and any current or future developers on the project. But allow yourself to extend the capabilities of Blade so you're not repeating the same logic over and over, or resorting to inline `<?php` blocks.
Go forth and simplify!
I just finished reading over 200 applications for our latest job posting, a Web Developer job at Tighten Co.. We still hire infrequently enough and are small enough that the two founders (Dan and me) and our operations manager (Dave) read every single application, which is hours upon hours of work before we even get to our initial phone screen.
Some applicants, and some tendencies among applicants, have stood out as best practices, but many more things have stood out as consistent turnoffs. So, I figured I'd share some with you here.
Caveat: Applying for a job at a large company is often very different than applying for a job at a small consultancy like Tighten. This is not about "how to apply for any job ever," but instead, it's a reflection on what helps me, as a hiring supervisor at Tighten, choose to put your application in our phone screen stack.
Many applicants applied purely because they liked Tighten. I'm overjoyed that we have such a good reputation that folks will apply to our job postings without even bothering to read the description. A little bit of attention to addressing the requirements of this particular job would've been nice in these circumstances, because each job posting is unique depending on the mixture of frontend, backend, or other particular skills. But that's not too bad.
What is disappointing, however, is the number of applicants who missed key elements of the job description. For example, we can only hire people from the U.S. and Canada right now. We wrote that explicitly on the job posting. And yet a solid fifth of our applicants were not from the U.S. or Canada.
This has happened before. So on the job posting this time, I wrote as a "Must Have" this item: "Reference a cat or a breed of cat in your application". I just snuck it in there. And only about a third of our applicants did reference a cat. ONE THIRD.
Now, again, I understand that quite a few applicants were just excited to apply to Tighten, and we didn't hold that against them. But it made the distinction even clearer between people who really cared about this job posting (or our company) and those who were just bulk-sending applications to every remotely viable job post.
Aesthetic appeal is important on your resumé! But colors and fonts and styles and layouts and other complicated stuff can be distracting on any resumé, and this is doubly true when you're not a professional designer. Keep it simple.
Keeping your resumé simple also applies to the content. There's such a thing as Too Much Information on a resumé. Give me the information that will help me decide to hire you and leave the rest off.
First, we can smell a generic cover letter from 20 feet away. It says to us, "your company isn't important enough for me to spend 15 minutes writing you a custom cover letter." This is something that could be a huge part of your life for the next few years or more, and you can't spend 15 minutes on a custom cover letter?
Second, a generic cover letter probably doesn't do a good job of communicating how you meet our needs. Each job posting is carefully crafted to find someone who meets our particular needs at the moment, so that you have the opportunity in your application to show why you are that person. Your generic cover letter that is designed to get you any job in the entire field of IT is not the best tool you have to do that.
Third, the number of times someone has pasted a generic cover letter when applying to our job and left another company's name on it would blow your mind. Someone who does that is instantly rejected, regardless of the quality of their application. Don't be that person.
The best cover letter shows that you understand who we are and what we want, and communicates to us why you can meet all of our requirements.
It seems that the less qualified someone is, the more likely they are to apply to absolutely everything. Many developers that we'd love to work with are extremely empathic, which in turn means they're much less likely to apply to a job posting that they don't think fits them perfectly.
This means we wade through many applications from people who don't meet any of our must-haves, but there are plenty of amazing developers who would be a great fit but don't fit 100% of our nice-to-haves and therefore never apply.
At least at Tighten, we intentionally separate our job requirements into "Must Have" and "Nice to Have". If you meet all the must-haves, apply! The nice-to-haves are just there so you can distinguish yourself if you meet those too, but don't let them keep you from applying.
We're going to check out your Twitter, and your web site, and we're going to read your resumé. We'll be curious about holes in your resumé, and the first things we check are your education, where your job history started, and what your most recent job looks like.
Quite a few applicants had web sites that were clearly from the late 1990's or early 2000's. Dan and I were both web developers then, so we can recognize a site from that time period well; additionally, your IE6 conditionals might be showing. :)
If you send something along to us, it should be how you want to present yourself to us. If something isn't up to snuff, and if you have to caveat it heavily, consider either re-making it or just not sending it at all. Only a few people actually make it from the screening round to an actual phone conversation, so the materials you send along (cover letter, resumé, web site, etc.) need to speak for you. Are they saying the things you want said?
Know the company you're applying to. Is it a big company? Keep it formal, and stuff all your keywords in. You need to pass their HR screen, so I get it.
But if it's a small company like Tighten, especially if you know the founders read the applications, be a human. Yes, we want you to be professional. But we want to know what you're like as a person. We're entirely remote, so we need to know that you know how to communicate effectively; if you're overly formal, that will affect our perception of what you'll be like to have on a team.
For us, you might try: "Dear Dan & Matt", "Dear Hiring Manager," "Hey Tighten folks,", or the internal favorite at Tighten, "Yo Stauff-meister". (The last one is a joke. You probably shouldn't actually do that.)
Sorry, because I know developers love to hate LinkedIn. I hate it too. Its emails are painfully annoying. Its UI is horrible. But it's one of the things we're going to look at when we evaluate your application.
If you don't have a LinkedIn, that looks like an intentional choice. But if you have a LinkedIn with two connections, no profile picture, and no useful information, that shows you're not concerned with managing your business relationships and reputation. It's not a disqualifying offense, but it looks a bit like a broken-down house on your property.
We love when applications are personal and fun. We want to get to know you, and we love hearing a little bit about your family and your hobbies. But for the cover letter, keep it to just a little bit. If two thirds of your cover letter is a story of every place you've lived for the last three years, and we know more about your life than the people closest to you do, there's a name for that: overdisclosure. Keep it short.
I understand that many developers' work situations keep them from being able to share code samples from their work, and not every developer has time to do lots of open source work.
However, I'm not going to hire a developer without seeing their code. And if you send over examples of your work, I will read it. So if the only samples you have are from 10 years ago, you're better off not sending it in.
Rather, spend some time on a night or weekend and create a side project solely for the purpose of applying to jobs. It's worth the time. Even if it's not a fully functional project, just write a single page of PHP or JavaScript or HTML or CSS. Give me something to work with.
In programming we talk about "code smells." A code smell doesn't mean the code is bad, just that you should be on guard.
Typos, grammar issues, and spelling issues are job application smells. Of course, we all make typos. But such issues, especially when we see several of them, suggest that attention to detail isn't one of your strengths, and that's a big turnoff.
Additionally, almost all such issues can be caught if you read over your application once or twice, if you run it through a spell checker, or if you have a friend read it over. Again, this job could be a huge part of your life over the next few years. Doesn't it merit a bit of attention?
It's completely fine if there's a particular reason for a gap in your work history, so if you know we're going to be asking a question ("Why no code samples?", "Why are you applying for this job if all your work experience is in Python?", etc.), please feel free to preemptively answer those questions for us.
But there's a difference between a preemptive answer and an excuse. Is it something you can fix up in a few hours, or even a weekend? Then just fix it. Is it a code sample that isn't a good fit? Don't send it. Make something else. Don't make excuses for things that you're capable of changing.
There were a few applicants whose applications stood head and shoulders above the rest. These folks wrote compelling cover letters, built custom application websites, and were involved with impressive projects and open source tools.
Standing out in the application phase doesn't guarantee you a job. But it almost completely guarantees you get to the phone call round. If someone took the time to learn about our company, build a custom web site to apply for it, and to generally show us that this job is something special to them, not just one of the fifty jobs they applied to today, that's a special thing for us.
Do you have other tips for job applicants I didn't include here? Let me know on Twitter!
I also asked around on Twitter:
I’m writing a blog post giving devs tips on sending in an initial job application. Hiring managers, anything you *really* want devs to hear?
— Matt Stauffer (@stauffermatt) September 29, 2015
And got these responses:
@stauffermatt Keep it short. Make it easy to extract the information. Most important things first.
— Rik Heywood (@RikHeywood) September 29, 2015
@stauffermatt and explain why you want to work for me - sending your CV to 1000 companies in the hope one will bite? I’m not interested.
— Rik Heywood (@RikHeywood) September 29, 2015
@stauffermatt I don't much care that you can't spell perfectly, you know how to use tools to correct must of your mistakes don't you?
— Guillaume Rossolini (@g_rossolini) September 29, 2015
@stauffermatt however if you complain "if nobody gives me a chance I'll never have experience", then you are definitely not who I need.
— Guillaume Rossolini (@g_rossolini) September 29, 2015
@stauffermatt don't sell yourself as a lifeless robot. IT geeks do have a bad reputation when it comes to social skills - prove them wrong
— Salvatore Mulas (@salvomulas) September 29, 2015
@stauffermatt Why they're in this industry in the first place. Are they driven by passion or are they just "putting their time in"?
— Kory Gorsky (@KoryGorsky) September 29, 2015
@stauffermatt probably a no brainer but I hated when devs lied about what they claimed to know. Be honest and upfront with skills/experience
— Jarrod Rizor (@JarrodRizor) September 29, 2015
@stauffermatt keep it to 1 page, 2 max. include links to your best code samples. provide detail on the technologies you claim to know
— Daniel Abernathy (@dabernathy89) September 29, 2015
@stauffermatt It's awesome that you're cool/quirky, but the standard cover letter/resume formats exist for a reason. Disregard at your peril
— J.T. Grimes (@JT_Grimes) September 29, 2015
@stauffermatt Your Github account is not your resume - we're hiring for much more than coding ability.
— J.T. Grimes (@JT_Grimes) September 29, 2015
@stauffermatt an answer to the question "Why do you want to work specifically HERE in our company (i.e. what do you know about us)"
— Laravel Daily (@DailyLaravel) September 29, 2015
@stauffermatt Submit your application like this - https://t.co/T56OM4Kybj #instanthire
— Eric L. Barnes (@ericlbarnes) September 29, 2015
@stauffermatt If your resume creates any obvious questions (where was this guy for three years?) your cover letter should answer them.
— J.T. Grimes (@JT_Grimes) September 29, 2015
@stauffermatt tell me how you added *value* to the last places you worked at, and why you think this is relevant to this one.
— Deprecated BIF (@dch__) September 29, 2015
@stauffermatt show me you can communicate, have empathy, and resolve conflict & awkward situations. Tell me what sort of team env you prefer
— Deprecated BIF (@dch__) September 29, 2015
@stauffermatt @funkatron Decent grammar and spelling are important. Not paying attention to them shows me a lack of attention to details.
— Dylan Ribb (@dylanribb) September 29, 2015
.env
file. Instructions below.
Craft is a fantastic CMS, but every CMS shows some pain points when you have a large team working on the same site at the same time. One of these points for me is Craft's native multi-environment configuration options, which allow you to define configuration options based on the domain name:
return [
'*' => [
'omitScriptNameInUrls' => true,
],
'example.dev' => [
'devMode' => true,
],
'example.com' => [
'cooldownDuration' => 0,
]
];
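To make the merge behavior concrete, here's a rough sketch of how a domain-keyed array like this could be resolved. Craft's actual implementation is more involved; `resolveConfig` is just an illustrative name for this example:

```php
<?php

// Illustrative sketch only: Craft's real config merging is more involved,
// and resolveConfig() is a made-up name for this example.
function resolveConfig(array $config, $domain)
{
    $resolved = isset($config['*']) ? $config['*'] : [];

    if (isset($config[$domain])) {
        // Domain-specific values win over the '*' defaults
        $resolved = array_merge($resolved, $config[$domain]);
    }

    return $resolved;
}

$config = [
    '*' => ['omitScriptNameInUrls' => true],
    'example.dev' => ['devMode' => true],
    'example.com' => ['cooldownDuration' => 0],
];

$dev = resolveConfig($config, 'example.dev');
// $dev contains both omitScriptNameInUrls and devMode
```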
This is great, but it's limited: you're hard-coding the configuration details into your code, which sometimes means putting sensitive information into your version control. Either every developer's local install has to use a different domain, or, if they share a domain, they all have to share the same configuration settings. And something just feels dirty about the codebase knowing every place it's going to be deployed.
I've fallen in love with how easy dotenv and phpdotenv make it to keep particular variables (e.g. database connection information) unique for each environment (local, staging, production, etc.) without committing them all to version control (e.g. GitHub).
This is especially helpful when you're developing as part of a team, where each member may have different connections across their unique "local" environments. It's also more secure, because your production database credentials aren't accessible to every person with access to your git repo.
phpdotenv
allows you to load in a file named .env
that sits in your project root (you can customize where it lives or what it's named, but that's the default) and add its keys/values to your $_ENV
global. Here's a quick look at a sample .env
file:
DB_HOST=localhost
DB_NAME=my_web_site
DB_USER=root
DB_PASS=root
With phpdotenv
, all the values you set in .env
become accessible across your entire codebase via $_ENV
and the getenv()
function, so you can have a single file for your environment-specific variables that gets loaded at runtime and makes them accessible anywhere. That means, anywhere in your code, you can just write getenv('DB_HOST')
and it will return, in the example above, localhost
.
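To illustrate the core idea, here's a toy sketch of what a dotenv-style loader boils down to: parse `KEY=VALUE` lines and push them into the environment so `getenv()` can see them. This is not phpdotenv's actual implementation, and `loadEnvString` is a name I made up for the sketch:

```php
<?php

// Toy illustration of the dotenv idea -- NOT phpdotenv's actual code.
// loadEnvString() is a made-up name for this sketch.
function loadEnvString($contents)
{
    foreach (preg_split('/\R/', $contents) as $line) {
        $line = trim($line);

        // Skip blank lines, comments, and anything without a key=value pair
        if ($line === '' || $line[0] === '#' || strpos($line, '=') === false) {
            continue;
        }

        list($key, $value) = explode('=', $line, 2);
        putenv(trim($key) . '=' . trim($value));
    }
}

loadEnvString("DB_HOST=localhost\nDB_NAME=my_web_site");

echo getenv('DB_HOST'); // localhost
```

The real library adds quoting, variable expansion, immutability options, and the `required()` validation used below.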
Additionally, you can require that each environment's .env
must contain certain fields, so that if a particular environment is missing certain keys, phpdotenv
will make a fuss until you fix it.
If you're confused as to how each environment has a unique .env
file, it's because you use .gitignore
to tell git to ignore that file, and in each environment you'll make a new copy from a template, named .env.example
.
Note: This requires using Composer. That might sound scary, but trust me, it's going to be simple.
vlucas/phpdotenv
First, create a file in your project root named composer.json
. Fill it with the following:
{
"require": {
"vlucas/phpdotenv": "^2.0"
}
}
If you haven't yet, install Composer.
Run composer install
.
index.php
Edit your public/index.php
file and add these lines to the top:
require_once(__DIR__ . '/../vendor/autoload.php');
try {
$dotenv = new Dotenv\Dotenv(dirname(__DIR__));
$dotenv->load();
$dotenv->required(['DB_HOST', 'DB_NAME', 'DB_USER', 'DB_PASS']);
} catch (Exception $e) {
    exit('Could not load the .env file, or required values are missing.');
}
.env
and .env.example
Now create a file in the root named .env
. For now, fill it with this:
DB_HOST=localhost
DB_NAME=craft
DB_USER=root
DB_PASS=root
Duplicate that file and name the duplicate .env.example
.
Add these lines to your .gitignore
file:
/vendor/
.env
.env
with appropriate connection details

Now go into `craft/config/db.php` (if this is an existing site) and move those values into your `.env` file so that it looks something like this:
DB_HOST=my.db.server.com
DB_NAME=mysite_craft
DB_USER=mysite_sql_user
DB_PASS=1395h901h91jr91
$_ENV
Update craft/config/db.php
to look like this:
return [
'server' => getenv('DB_HOST'),
'user' => getenv('DB_USER'),
'password' => getenv('DB_PASS'),
'database' => getenv('DB_NAME'),
'tablePrefix' => 'craft',
];
That's it! Your site is now getting its configuration details from your .env
file. Every time you spin up a new instance of this site, just create a new .env
file from the .env.example
template in the new environment and set its details appropriately.
As you've probably realized, you can set other properties in here; I use it to set BASE_URL
and then pull that in craft/config/general.php
as an environmentVariable
.
Composer allows us to pull in external code. So we initialized a new Composer configuration file (if you're a Composer guru, you might be mad that I didn't teach composer init
... I know, me too), and then told Composer to require this phpdotenv
package. Then we asked Composer to install it.
Then we needed to pull the Composer autoloader into our code so that we had access to any packages it installs. Once we had that, we could use Dotenv
's loader to pull in our .env
file and import its keys and values to our $_ENV
.
We updated our .env
to have the correct connection details, and then updated Craft's database configuration array to pull its details from our .env
using the getenv()
function.
This has two effects on your deployment.
First, every time you spin up a new environment, you need to copy .env.example
to .env
and fill out those details correctly for the new environment.
Second, your deployment servers all need Composer. Thankfully, every modern host has Composer on it. If you don't have a good host, I highly recommend Laravel Forge and DigitalOcean for quick and easy Craft hosting.
That's it! Look at you and your environment-specific configs go!
Note: I just added Laravel's `env()` helper function to my craftPluginDevHelpers plugin. This allows you to set fallback default values, and it also converts boolean values like `true` to actual PHP booleans. However, relying on an installed plugin for this is a bit sketchy, since if it's not installed your `env` calls won't work right. So, I would recommend either manually checking for boolean strings (e.g. `if (getenv('THINGISENABLED') == 'true')`), OR including Laravel's Illuminate/Support package via Composer.
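As a sketch of the "manually check for boolean strings" approach, here's a small boolean-aware helper. The name `env_value` and its fallback behavior are my own, loosely modeled on Laravel's `env()`; it is not Craft's or Laravel's actual API:

```php
<?php

// Sketch of the "manually check for boolean strings" approach.
// env_value() is a made-up name, loosely modeled on Laravel's env();
// it is not Craft's or Laravel's actual API.
function env_value($key, $default = null)
{
    $value = getenv($key);

    if ($value === false) {
        return $default; // key not set at all: use the fallback
    }

    switch (strtolower($value)) {
        case 'true':
            return true;
        case 'false':
            return false;
        case 'null':
            return null;
        default:
            return $value;
    }
}

putenv('THINGISENABLED=true');
var_dump(env_value('THINGISENABLED')); // bool(true)
```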
I've held off on writing about it until now, because it has changed a lot over the span of its development. It's released as an Alpha now, so the API has solidified some... but it'll still change quite a bit between now and the release.
WARNING: This article is about an alpha release. This release is not intended to show the final API or feature set. Spark will change often before its release, and I won't always catch every change immediately. If you find any ways this guide has become out-of-date as Spark changes between Alpha and final release, please let me know here. If you hate something and think it's the worst idea you've ever seen, chill. This is just an alpha and there's no promise anything will stay the way it is right now.
If you want a quick, TL;DR version of how to install Spark, check out Laravel News' quick writeup. This is, instead, a deep-dive into how it works and what it does. I'll be writing a more general introduction to Spark once it's actually released, so beware: this is a bit of a deeper dive, for people geeky enough to want to look at an alpha release.
In case you're still having a bit of trouble understanding what Spark is really about: it's a tool designed to make it quicker for you to spin up SaaS applications, and it handles user authentication, plans, payments, coupons, and team logic.
Most SaaSes have these same components: user accounts, Stripe-based payments, and different payment plans. And many have payment coupons and team payment options.
Rather than re-creating this functionality with every new Laravel app you create, just use Spark, and you'll get all that and a free SaaS landing page to boot.
Like Laravel and Lumen, Spark has a global Composer installer to make installation simpler. To install the Spark installer, run this command:
composer global require "laravel/spark-installer=~1.0"
Note: Like with the other installers, you need to make sure that the global Composer
bin
is in your system's$PATH
so that when you runspark
from the command line, it'll findspark
from within that folder.
So, let's create a new Laravel application:
cd ~/Sites
laravel new spark-blog-post
cd spark-blog-post
Next, we install Spark:
spark install
That's it for installation. It'll give you several prompts; you'll probably just want to choose yes
for everything.
Finally, like it told you to, go into your .env
and add your Stripe Key & Secret, and, optionally, your Authy key.
If you, like me, develop on Homestead, the "migrations" step likely just did nothing. And if you hadn't edited your .env
before you ran spark install
, which I didn't tell you to do, then it REALLY didn't do anything. But that's fine.
The best way to do it, if you're setting up this site on Homestead, is: now that you've installed Spark, go edit your .env
file to customize the database name that you'd like to use for this site. Then ssh into your Homestead box and migrate the database from there. Done.
Spark has done quite a bit here. Check it out:
This is changing some default views (e.g. changing the root from showing welcome
to showing spark::welcome
) and adding others (/home
, wrapped in the auth
middleware). It's also updating the User
model to make it Billable
and TwoFactorAuthenticatable
. It's adding quite a few Stripe-related fields to the User
that you'll be familiar with if you've ever used Laravel Cashier (and if you hadn't guessed, Cashier is a dependency of Spark).
It's adding CashierServiceProvider
and two SparkServiceProvider
s. It's updating the password reset email to be the Spark password reset email. It's updating the create tables
migration to add Cashier, Team, and Two-Factor Auth columns.
Finally, it's pulling in app.js
into Elixir and adding some Spark-specific Sass variables to app.scss
.
Spark also added quite a few files for you. Let's look through them.
The SparkServiceProvider
is where you do most of the customization, so it's accordingly huge.
<?php
namespace App\Providers;
use App\Team;
use Validator;
use Laravel\Spark\Spark;
use Illuminate\Http\Request;
use Laravel\Spark\Providers\AppServiceProvider as ServiceProvider;
class SparkServiceProvider extends ServiceProvider
{
/**
* Meta-data included in invoices generated by Spark.
*
* @var array
*/
protected $invoiceWith = [
'vendor' => 'Your Company',
'product' => 'Your Product',
'street' => 'PO Box 111',
'location' => 'Your Town, 12345',
'phone' => '555-555-5555',
];
/**
* Bootstrap any application services.
*
* @return void
*/
public function boot()
{
parent::boot();
//
}
/**
* Customize general Spark options.
*
* @return void
*/
protected function customizeSpark()
{
Spark::configure([
'models' => [
'teams' => Team::class,
]
]);
}
/**
* Customize Spark's new user registration logic.
*
* @return void
*/
protected function customizeRegistration()
{
// Spark::validateRegistrationsWith(function (Request $request) {
// return [
// 'name' => 'required|max:255',
// 'email' => 'required|email|unique:users',
// 'password' => 'required|confirmed|min:6',
// 'terms' => 'required|accepted',
// ];
// });
// Spark::createUsersWith(function (Request $request) {
// // Return New User Instance...
// });
}
/**
* Customize the roles that may be assigned to team members.
*
* @return void
*/
protected function customizeRoles()
{
Spark::defaultRole('member');
Spark::roles([
'admin' => 'Administrator',
'member' => 'Member',
]);
}
/**
* Customize the tabs on the settings screen.
*
* @return void
*/
protected function customizeSettingsTabs()
{
Spark::settingsTabs()->configure(function ($tabs) {
return [
$tabs->profile(),
$tabs->teams(),
$tabs->security(),
$tabs->subscription(),
// $tabs->make('Name', 'view', 'fa-icon'),
];
});
Spark::teamSettingsTabs()->configure(function ($tabs) {
return [
$tabs->owner(),
$tabs->membership(),
// $tabs->make('Name', 'view', 'fa-icon'),
];
});
}
/**
* Customize Spark's profile update logic.
*
* @return void
*/
protected function customizeProfileUpdates()
{
// Spark::validateProfileUpdatesWith(function (Request $request) {
// return [
// 'name' => 'required|max:255',
// 'email' => 'required|email|unique:users,email,'.$request->user()->id,
// ];
// });
// Spark::updateProfilesWith(function (Request $request) {
// // Update $request->user()...
// });
}
/**
* Customize the subscription plans for the application.
*
* @return void
*/
protected function customizeSubscriptionPlans()
{
// Spark::free()
// ->features([
// 'Feature 1',
// 'Feature 2',
// 'Feature 3',
// ]);
// Spark::plan('Basic', 'stripe-id')->price(10)
// ->trialDays(7)
// ->features([
// 'Feature 1',
// 'Feature 2',
// 'Feature 3',
// ]);
}
}
Let's walk through it one piece at a time.
This will customize the invoices that Spark generates. Put your information here; it ends up on the PDF. Done.
protected $invoiceWith = [
'vendor' => 'Your Company',
'product' => 'Your Product',
'street' => 'PO Box 111',
'location' => 'Your Town, 12345',
'phone' => '555-555-5555',
];
This allows you to add your own validation requirements and customize the user creation process.
protected function customizeRegistration()
{
// Spark::validateRegistrationsWith(function (Request $request) {
// return [
// 'name' => 'required|max:255',
// 'email' => 'required|email|unique:users',
// 'password' => 'required|confirmed|min:6',
// 'terms' => 'required|accepted',
// ];
// });
// Spark::createUsersWith(function (Request $request) {
// // Return New User Instance...
// });
}
The createUsersWith
function is run in the registration controller like this:
// Laravel\Spark\Repositories\UserRepository
protected function createNewUser(Request $request, $withSubscription)
{
if (Spark::$createUsersWith) {
return $this->callCustomUpdater(Spark::$createUsersWith, $request, [$withSubscription]);
} else {
return $this->createDefaultUser($request);
}
}
And, just for your customization purposes, this is what that method will do if you don't override it:
// Laravel\Spark\Repositories\UserRepository
protected function createDefaultUser(Request $request)
{
$model = config('auth.model');
return (new $model)->create([
'name' => $request->name,
'email' => $request->email,
'password' => bcrypt($request->password),
]);
}
By default, Spark has a role system for the Teams setting. You can set up your list of roles and your default here.
protected function customizeRoles()
{
Spark::defaultRole('member');
Spark::roles([
'admin' => 'Administrator',
'member' => 'Member',
]);
}
By default, Spark creates an account/admin panel with a few default tabs. You can remove tabs, re-order them, or add your own.
If you're using Teams, you can also customize the Team tabs.
protected function customizeSettingsTabs()
{
Spark::settingsTabs()->configure(function ($tabs) {
return [
$tabs->profile(),
$tabs->teams(),
$tabs->security(),
$tabs->subscription(),
// $tabs->make('Name', 'view', 'fa-icon'),
];
});
Spark::teamSettingsTabs()->configure(function ($tabs) {
return [
$tabs->owner(),
$tabs->membership(),
// $tabs->make('Name', 'view', 'fa-icon'),
];
});
}
##### customizeProfileUpdates()
Just like you could customize the validation logic and user creation process for user registration, you can do the same thing for the user profile update process.
protected function customizeProfileUpdates()
{
// Spark::validateProfileUpdatesWith(function (Request $request) {
// return [
// 'name' => 'required|max:255',
// 'email' => 'required|email|unique:users,email,'.$request->user()->id,
// ];
// });
// Spark::updateProfilesWith(function (Request $request) {
// // Update $request->user()...
// });
}
The default behavior in the ProfileController
that you'd be overwriting in updateProfilesWith
is:
Auth::user()->fill($request->all())->save();
As the name suggests, this is where you define your plans. You can add free plans, monthly plans, and yearly plans, and define the price, number of trial days, and feature list.
This code snippet shows everything except a yearly plan, which you'd define by adding a ->yearly()
fluent call to your plan definition.
protected function customizeSubscriptionPlans()
{
// Spark::free()
// ->features([
// 'Feature 1',
// 'Feature 2',
// 'Feature 3',
// ]);
// Spark::plan('Basic', 'stripe-id')->price(10)
// ->trialDays(7)
// ->features([
// 'Feature 1',
// 'Feature 2',
// 'Feature 3',
// ]);
}
There's a new model for Teams, if you're going to use them.
<?php
namespace App;
use Laravel\Spark\Teams\Team as SparkTeam;
class Team extends SparkTeam
{
//
}
It's extending this model for customization (trimmed for brevity):
<?php
namespace Laravel\Spark\Teams;
...
class Team extends Model
{
protected $table = 'teams';
protected $fillable = ['name'];
/**
* Get all of the users that belong to the team.
*/
public function users() {}
/**
* Get the owner of the team.
*/
public function owner() {}
/**
* Get all of the pending invitations for the team.
*/
public function invitations() {}
/**
* Invite a user to the team by e-mail address.
*
* @param string $email
* @return \Laravel\Spark\Teams\Invitation
*/
public function inviteUserByEmail($email) {}
/**
* Remove a user from the team by their ID.
*
* @param int $userId
* @return void
*/
public function removeUserById($userId) {}
}
Here's the up
migration:
// Create Teams Table...
Schema::create('teams', function (Blueprint $table) {
$table->increments('id');
$table->integer('owner_id')->index();
$table->string('name');
$table->timestamps();
});
// Create User Teams Intermediate Table...
Schema::create('user_teams', function (Blueprint $table) {
$table->integer('team_id');
$table->integer('user_id');
$table->string('role', 25);
$table->unique(['team_id', 'user_id']);
});
// Create Invitations Table...
Schema::create('invitations', function (Blueprint $table) {
$table->increments('id');
$table->integer('team_id')->index();
$table->integer('user_id')->nullable()->index();
$table->string('email');
$table->string('token', 40)->unique();
$table->timestamps();
});
This is where your general application JavaScript should go, and it's pre-filled with some Spark JavaScript.
/*
|--------------------------------------------------------------------------
| Laravel Spark - Creating Amazing Experiences.
|--------------------------------------------------------------------------
|
| First, we will load all of the "core" dependencies for Spark which are
| libraries such as Vue and jQuery. Then, we will load the components
| which manage the Spark screens such as the user settings screens.
|
| Next, we will create the root Vue application for Spark. We'll only do
| this if a "spark-app" ID exists on the page. Otherwise, we will not
| attempt to create this Vue application so we can avoid conflicts.
|
*/
require('laravel-spark/core/dependencies');
if ($('#spark-app').length > 0) {
require('./spark/components')
new Vue(require('laravel-spark'));
}
This pulls in the VueJS components for the individual pages.
/*
|--------------------------------------------------------------------------
| Spark Page Components
|--------------------------------------------------------------------------
|
| These components control the user settings screens for Spark. You will
| change these paths to your own custom components if you need to use
| your own component to provide custom logic for your applications.
|
| In addition, the components which control the team settings and member
| management are also included. Again, you may change these paths and
| require your own custom-built components to manage these screens.
*/
require('laravel-spark/settings/dashboard/profile')
require('laravel-spark/settings/dashboard/security/password')
require('laravel-spark/settings/dashboard/security/two-factor')
require('laravel-spark/settings/team/owner')
require('laravel-spark/settings/team/membership/edit-team-member')
This is the default home page that we looked at before.
@extends('spark::layouts.app')
@section('content')
<!-- Main Content -->
<div class="container spark-screen">
@if (Spark::usingTeams() && ! Auth::user()->hasTeams())
<!-- Teams Are Enabled, But The User Doesn't Have One -->
<div class="row">
<div class="col-md-10 col-md-offset-1">
<div class="panel panel-default">
<div class="panel-heading">You Need A Team!</div>
<div class="panel-body bg-warning">
It looks like you haven't created a team!
You can create one in your <a href="/settings?tab=teams">account settings</a>.
</div>
</div>
</div>
</div>
@else
<!-- Teams Are Disabled Or User Is On Team -->
<div class="row">
<div class="col-md-10 col-md-offset-1">
<div class="panel panel-default">
<div class="panel-heading">Dashboard</div>
<div class="panel-body">
Your Application's Dashboard.
</div>
</div>
</div>
</div>
@endif
</div>
@endsection
This file will generate your "Terms of Service" page.
This page is generated from the `terms.md` file in your project root.
Phew. That was a lot. Let's actually see what we get here. First, we get the beautiful landing page we saw in the screenshot above. But what else?
First, let's check out our login page. Notice we have a nice Bootstrap-based layout with a fixed footer, Copyright information, and some basic social links.
We also got a Password Reset page:
And a Register page:
And terms of service:
Notice we also have error handling baked in:
OK, time to register. Once we do, we hit the dashboard (/home
), which has a dropdown menu allowing us to logout and edit our settings:
Let's check out that User Settings Page. Notice that these are the tabs that we could've edited in the SparkServiceProvider
above.
Edit your password and two-factor auth:
That's it out-of-the-box. Let's explore some more concepts.
In order to enable your users to join teams, you need to use the CanJoinTeams
trait in your User model. Thankfully, Spark already imported that class in your User model's import block, so it's as simple as adding CanJoinTeams
to the use
Trait list in your model:
// app/User.php
class User ...
{
use Authorizable, Billable, CanJoinTeams, CanResetPassword, TwoFactorAuthenticatable;
Now when we visit our user settings panel, we see something a little different:
Let's add a team.
In this context, I choose to think of a Team as something like an Account. We have a web app where you log in, pay, and see your data not by a single user account, but by a Team/Group which has a single Owner. The Owner is responsible for paying and appointing admins; the admins and owner can invite and delete other users. Every user on the team uses it and accesses the same data, possibly with unique roles.
So, let's try editing a team and see what we get.
Of course, we want to invite someone else to our team.
Note: if you get a validation error the first time you try to invite someone to your team, check your logs; it's likely because the default Mail configuration (in
.env
) sends to Mailtrap.io. You can either change this, or set up Mailtrap so that your email works.
Here's what we see until they accept:
And what they see once they click the link in their email:
Once that user signs up, they'll have the opportunity to leave your team:
Note: There are no default restrictions around creation of teams. Users can create as many teams as they like. It's up to you to constrain them using
validateNewTeamsWith()
.
If you want to customize the validation of new team creation, check out the validateNewTeamsWith()
method. As of the writing of this post, it's not shown by default in SparkServiceProvider
, but you can go into the customizeSpark()
method and add a call to it:
protected function customizeSpark()
{
Spark::configure(...);
Spark::validateNewTeamsWith(function() {
// Validate here...
});
}
Also, note that, if you have teams enabled, you'll be prompted to name your team when you sign up:
Once you add the CanJoinTeams
trait to your user model, they'll gain a few useful methods, including:
$user->hasTeams()
shows whether they have any teams that they're associated with.
$user->current_team
or $user->currentTeam()
accesses the currently-selected team.
$user->ownsTeam($team)
determines whether the user owns the team passed in.
$user->teamRole($team)
gets the role for the member's relationship to the team passed in.
Until now, every sample I've shown has been how the app works without Stripe and plans set up. Now let's go add a Stripe key and secret and add some plans to the SparkServiceProvider
.
Like with Cashier, you need to add the plan to Stripe first. Let's add a free plan, and a yearly and monthly version of the same plan. Now, let's add them to the SparkServiceProvider
:
// SparkServiceProvider
protected function customizeSubscriptionPlans()
{
Spark::free()
->features([
'Feature 1',
'Feature 2',
'Feature 3',
]);
Spark::plan('Basic Monthly', 'basic-monthly')->price(10)
->trialDays(7)
->features([
'Feature 1',
'Feature 2',
'Feature 3',
]);
Spark::plan('Basic Yearly', 'basic-yearly')->price(120)
->trialDays(7)
->yearly()
->features([
'Feature 1',
'Feature 2',
'Feature 3',
]);
}
All of a sudden, we get a Subscription tab:
And check the registration flow now:
Then the registration page, taking your payment information:
You can check their plan in your code:
Auth::user()->getStripePlan();
Spark passes coupon requests along to Stripe, so you don't need to do anything to add them except to add the coupon to Stripe. Just have the users pass the coupon as a parameter when they visit the registration page:
http://yourapp.com/register?coupon=yourCouponCodeHere
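If you're generating that link yourself, say for a promotional email, plain `http_build_query()` does the job. The domain and coupon code here are the placeholders from the example above:

```php
<?php

// The domain and coupon code are placeholders from the example above.
$url = 'http://yourapp.com/register?' . http_build_query([
    'coupon' => 'yourCouponCodeHere',
]);

echo $url; // http://yourapp.com/register?coupon=yourCouponCodeHere
```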
Without having to write any code, you just got your coupon hooked right in:
You can also temporarily add a site-wide coupon by adding this in the SparkServiceProvider
(likely in the customizeSpark()
method):
Spark::promotion('coupon-code-here');
You can define the roles for your team in SparkServiceProvider
like we showed above.
// SparkServiceProvider
protected function customizeRoles()
{
Spark::defaultRole('member');
Spark::roles([
'admin' => 'Administrator',
'member' => 'Member',
]);
}
You can customize the default role, choose the options (each with a key and a label), and once you create these roles you can check for them elsewhere:
echo Auth::user()->teamRole(Auth::user()->current_team);
I'm guessing there will be (or there already are and I haven't found them yet) simpler ways to get and check this sort of information, but it's in there already if you use code like the above.
By default Spark publishes a few views. If you want more, there are two options: one for the basic views, and another for all views.
php artisan vendor:publish --tag=spark-basics
Which outputs these views:
resources/views/vendor/spark/emails/auth/password/email.blade.php
resources/views/vendor/spark/emails/billing/invoice.blade.php
resources/views/vendor/spark/emails/team/invitations/new.blade.php
resources/views/vendor/spark/emails/team/invitations/existing.blade.php
resources/views/vendor/spark/welcome.blade.php
resources/views/vendor/spark/nav/guest.blade.php
resources/views/vendor/spark/layouts/app.blade.php
resources/views/vendor/spark/common/footer.blade.php
resources/views/vendor/spark/nav/authenticated.blade.php
resources/views/vendor/spark/layouts/common/head.blade.php
resources/views/vendor/spark/settings/tabs/profile.blade.php
resources/views/vendor/spark/settings/tabs/security.blade.php
resources/views/vendor/spark/settings/team/tabs/owner.blade.php
resources/views/vendor/spark/auth/registration/simple/basics.blade.php
resources/views/vendor/spark/auth/registration/subscription/basics.blade.php
resources/views/vendor/spark/settings/team/tabs/membership/modals/edit-team-member.blade.php
Or, there's the full export:
php artisan vendor:publish --tag=spark-full
This outputs every file in the entire spark directory, which is far too many to list here. They'll all end up in `resources/views/vendor/spark`.
If you want to disable Two-Factor Authentication, add a `protected $twoFactorAuth = false;` property on your `SparkServiceProvider`.
The Spark class has a few other methods available on it; here are a few of note:
`Spark::forcingPromotion()` returns whether or not we're forcing a promotion site-wide at the moment.

`Spark::retrieveUsersWith()` allows you to customize the method Spark uses to retrieve the current user.
You made it! This is a LOT, I know. Once Spark is settled, I'll write another blog post that's less of a deep dive and more of a general introduction to how Spark works, but since you're brave and looking at the alpha, I gave you a deeper dive.
In general, I couldn't be more excited about Spark. We write this sort of code so often and having a pre-built set of tools to do it for you--especially with as much nuance and customization as Spark provides--is amazing.
There's a lot more going on under the hood. I just revealed the pieces here that I think will be most interesting. Like I wrote before, this will all change; I'll do my best to keep it up to date, but I'd love your help in pointing out if I've missed anything.
But if you needed to control access to certain sections of the site, or turn on or off particular pieces of a page for non-admins, or ensure someone can only edit their own contacts, you needed to bring in a tool like BeatSwitch Lock or hand-roll the functionality, which would be something called ACL: Access Control Lists, or basically the ability to define someone's ability to do and see certain things based on attributes of their user record.
Thankfully, Taylor and Adam Wathan wrote an ACL layer in Laravel 5.1.11 that provides this functionality without any added work.
The out-of-the-box Laravel ACL is called `Gate` (that's not a product name like "Spark", but rather the name of the classes and the façades). Using the `Gate` classes (either injecting them or using the Gate façade) allows you to easily check if a user (either the currently-logged-in user or a specific user) is "allowed" to do a certain thing. Check out this syntax for a taste:
if (Gate::denies('update-contact', $contact)) {
abort(403);
}
Drop that into your controller and it checks the currently authenticated user against a ruleset that you defined (and which you named `update-contact`); it takes the data of that particular contact, checks it against the ruleset, and returns whether or not the user is authorized.
You can also check for `Gate::allows`, you can use it in conditionals in Blade with `@can`, and there's much, much more. So, let's take a look.
Everything with Laravel's ACL is founded on a concept called an "Ability." An Ability is a key (e.g. "update-contact") and a Closure (with optional parameters) that returns either true or false.
Let's define an Ability in the default location, the `AuthServiceProvider`:
...
class AuthServiceProvider extends ServiceProvider
{
public function boot(GateContract $gate)
{
parent::registerPolicies($gate);
$gate->define('update-contact', function ($user, $contact) {
return $user->id === $contact->user_id;
});
}
}
As you can see, the first parameter for our Closure is the user. If there's no currently-authenticated user (and if you haven't specified one--we'll see that later), Gate will automatically return false for every Ability.
Just like most other places in Laravel that accept Closures (e.g. route definition), you can pass a class name and method into the second parameter of `define` instead of a Closure, and it'll be resolved out of the Container:
$gate->define('update-post', 'PostACLCheckerThingie@update');
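Since `PostACLCheckerThingie` is just a placeholder name, here's a rough sketch of what such a class could look like; the only real requirement is that the method accepts the same parameters the Closure version would:

```php
<?php

namespace App;

class PostACLCheckerThingie
{
    // Same signature as the Closure version: the current user
    // first, then whatever you pass along when checking.
    public function update($user, $post)
    {
        return $user->id === $post->user_id;
    }
}
```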
`Gate` allows you to check using the following methods: `check`, `allows`, or `denies`. Note that `check` is just the same as `allows`, and `denies` is exactly the opposite of `allows`; so it's really just `allows` with a clone named `check` and an opposite check named `denies`.
If you're using the façade, you won't need to pass in the user; the façade automatically passes in the currently authenticated user for you.
if (Gate::denies('update-contact', $contact)) {
abort(403);
}
if (Gate::allows('create-contact')) {
redirect('hooray');
}
Or, if you've defined an Ability with multiple parameters:
$gate->define('delete-interaction', function ($user, $contact, $interaction) {
// Do stuff...
});
Just pass an array to the second parameter:
if (Gate::allows('delete-interaction', [$contact, $interaction])) {
// Do stuff...
}
What if you want to check this ability for a specific user, instead of the currently authenticated user?
if (Gate::forUser($user)->denies('update-contact', $contact)) {
abort(403);
}
As always, you can inject the class itself instead of using the façade. The class you'll inject is the same `GateContract` that's injected into the `AuthServiceProvider`: `Illuminate\Contracts\Auth\Access\Gate`.
public function somethingResolvedFromContainer(Gate $gate)
{
if ($gate->denies('create-team')) {
// etc.
}
}
Laravel's `App\User` model now provides `can` and `cannot`, which mirror `allows` and `denies` on the `Gate`. This comes from the `Authorizable` trait.
So, if we have a user somewhere, we can check `can()` on them:
if ($user->can('update-contact', $contact)) {
// Do stuff
}
You can also use `can` (optionally with `else`) in Blade:
<nav>
<a href="/">Home</a>
@can('edit-contact', $contact)
<a href="{{ route('contacts.edit', [$contact->id]) }}">Edit This Contact</a>
@endcan
</nav>
What if you have the idea of a superuser, or admin? Or what if you want to be able to set a temporary toggle to change the ACL logic for your users?
The `before` function allows you to return early, before all of your other checks, in certain exceptional circumstances.
$gate->before(function ($user, $ability) {
if ($user->last_name === 'Stauffer') {
return true;
}
});
Or, more realistically:
$gate->before(function ($user, $ability) {
if ($user->isOwner()) {
return true;
}
});
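If I'm reading the Gate source right, any non-null return from `before` short-circuits the check (true allows, false denies), and returning nothing falls through to the normal Ability definitions. That means you can also use it to deny early; the `is_banned` attribute here is hypothetical:

```php
$gate->before(function ($user, $ability) {
    // Returning false denies every Ability for this user;
    // returning nothing (null) falls through to normal checks.
    if ($user->is_banned) {
        return false;
    }
});
```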
There's another concept that you can (optionally) use to define access logic in your applications. It's an organizational structure that'll help keep you from crudding up the `AuthServiceProvider`; it's almost like a controller, in that it helps you group your ACL logic based on the resource that it's controlling access to.
You can generate a policy with Artisan:
php artisan make:policy ContactPolicy
Then you register it with the `AuthServiceProvider` in the `policies` property:
class AuthServiceProvider extends ServiceProvider
{
protected $policies = [
Contact::class => ContactPolicy::class,
];
Here's what our auto-generated Policy looks like:
<?php
namespace App\Policies;
class ContactPolicy
{
/**
* Create a new policy instance.
*
* @return void
*/
public function __construct()
{
//
}
}
So, let's define the `update` method:
<?php
namespace App\Policies;
class ContactPolicy
{
public function update($user, $contact)
{
return $user->id === $contact->user_id;
}
}
Note: I've primarily been using the `update` method as an example here. There are a few situations (see below) where the method name matters, because it needs to sync with the calling method. But you'll likely have a variety of method names: `show`, `create`, or even `addInteraction`.
If there's a policy defined for a resource type, the `Gate` will use the first parameter key to figure out which method to check on your policy. So, to check if you can update a Contact, just pass the contact in and check for the `update` Ability. This code will pass to the `update` method on the `ContactPolicy`:
if (Gate::denies('update', $contact)) {
abort(403);
}
This also works for the User model checking and Blade checking.
Additionally, there's a `policy` helper that allows you to retrieve a policy class and run its methods:
if (policy($contact)->update($user, $contact)) {
// Do stuff
}
Since much of the authorization will be quitting out of a controller method if the Ability is denied, there's a shortcut for that when in a Controller (which is added via the new `AuthorizesRequests` trait):
public function update($id)
{
$contact = Contact::findOrFail($id);
$this->authorize('update', $contact);
// Do stuff...
}
Just like in our examples above, this will throw a `403` error if the authorization fails.
And finally, if your controller method name lines up with the same method name on the Policy (e.g. the `update` controller method and the `update` method on the Policy), you can skip the first parameter of `authorize` entirely:
public function update($id)
{
$contact = Contact::findOrFail($id);
$this->authorize($contact);
// Do stuff...
}
This is one of my favorites; you might not love the magic, but since this is something I do so often, I'm really excited to trim down the amount of my controller methods that are dedicated to the same ACL logic, over and over.
That's it. As someone who has written ACLs dozens of times, I can say: This is better than anything I've built, simpler than anything I've imported from others, and does everything I need.
If you need anything like Roles, user groups, or database-defined permissions levels, you'll still need to do some of the work yourself--and you may still find yourself reaching for an external package. But for most circumstances, this is more than enough, and just as simple as it can be.
`DROP TABLES` command run on a live production database. It happened about four hours before I was scheduled to turn on database backups, and about three days before I was scheduled to move the site to a server that Linode backs up daily. If you're not familiar with this, it basically means we lost every tiny little piece of data on a live server. Everything.
Needless to say, we all freaked. It's a product in alpha, with just a few very early testers, which is why it was on a crappy server. But this was still terrible for us; we want our customers to be able to rely on us to keep their data safe, even in an early alpha. We had one or two users who weren't just kicking the tires but were actually using this for their day-to-day work. I hadn't gotten the backups turned on because I had been traveling and I had a reminder set at 9:30pm to turn them on.
I may expand this blog post later, but in short: I've worked with my team since 6pm eastern (it's 2:35am eastern right now), I'm exhausted and need to sleep, but we went from every database table is not just truncated but entirely deleted to we have all of our data back in one piece.
It's all due to the magician that is Aleks, the man behind Twindb. Through his articles, and a bit of Twitter and email help, I was able to extract our old data from the deepest depths of InnoDB's operating archives.
In short, we used the tool `sys_parser` to recover our SQL schema ([Recover table structure from InnoDB dictionary](https://twindb.com/recover-table-structure-from-innodb-dictionary/)). Then we installed Twindb's InnoDB recovery kit using the instructions on the first half of this post: How to Recover InnoDB Dictionary. Finally, we used the instructions at Recover InnoDB Table After DROP to extract the pages of the InnoDB storage file into individual files using `stream_parser`, and with a combination of `grep` and `c_parser`, we identified which page each table was stored in, and then used `c_parser` and the schema SQL files we generated with `sys_parser` to create files we could import into SQL.
If I have enough energy to talk about this later, I'll write it up further. But it needs to be said: We wouldn't've gotten anywhere without the help of the man behind Twindb, Aleksandr Kuzminsky. Just look at this Twitter thread. He also emailed with me in the midst of traveling to a camp site. I can't thank him enough for his help on this.
Also, many, many friends responded to my request for help on Twitter. Thank you so much.
So, know: it is possible to recover data even after a full drop/truncate. It's a lot of work, and you're much better off just backing up your data regularly. But if you're already in that spot... it can be done.
But, whether or not you know it, any login forms are likely to get a lot of automated login attempts. Most login forms don't stop an automated attack trying email after email, password after password, and since those aren't being logged, you might not even know it's happening.
The best solution to something like this is to halt a user from attempting logins after a certain number of failed attempts. This is called login throttling, or rate limiting.
Graham Campbell wrote a great package called Laravel Throttle to address this in previous versions of Laravel, but in Laravel 5.1 Login throttling comes right out of the box.
By default, Laravel 5.1's `AuthController` already imports the `ThrottlesLogins` trait, so every new Laravel 5.1 app already has this enabled out of the box.
<?php
namespace App\Http\Controllers\Auth;
use App\User;
use Validator;
use App\Http\Controllers\Controller;
use Illuminate\Foundation\Auth\ThrottlesLogins;
use Illuminate\Foundation\Auth\AuthenticatesAndRegistersUsers;
class AuthController extends Controller
{
use AuthenticatesAndRegistersUsers, ThrottlesLogins;
In order for it to work, you just need to display errors on your login page, which you'll likely already have because you need to display "bad username/password" type errors; something like this:
@if (count($errors) > 0)
<div class="alert alert-danger">
<strong>Whoops!</strong> There were some problems with your input.<br><br>
<ul>
@foreach ($errors->all() as $error)
<li>{{ $error }}</li>
@endforeach
</ul>
</div>
@endif
Once you do, anyone who has 5 failed logins in a row will be stopped from logging in for 60 seconds. Both of these values are customizable; read below to see how.
If you check out the `ThrottlesLogins` trait, you can see it's incrementing a cache counter on every failed login. The cache key for whether you're treated as the same user is based on username and IP address:
return 'login:attempts:'.md5($username.$request->ip());
That leaves us with code looking like this:
$attempts = $this->getLoginAttempts($request);
$lockedOut = Cache::has($this->getLoginLockExpirationKey($request));
if ($attempts > $this->maxLoginAttempts() || $lockedOut) {
if (! $lockedOut) {
Cache::put(
$this->getLoginLockExpirationKey($request), time() + $this->lockoutTime(), 1
);
}
return true;
}
return false;
We can also learn that we can customize how long it locks them out by simply setting a `lockoutTime` property on our `AuthController`:
private function lockoutTime()
{
return property_exists($this, 'lockoutTime') ? $this->lockoutTime : 60;
}
The same is true for the number of login attempts; customize that with a `maxLoginAttempts` property. We learn that here:
protected function maxLoginAttempts()
{
return property_exists($this, 'maxLoginAttempts') ? $this->maxLoginAttempts : 5;
}
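Putting the two customizations together, your `AuthController` might look something like this (the values here are arbitrary examples):

```php
class AuthController extends Controller
{
    use AuthenticatesAndRegistersUsers, ThrottlesLogins;

    // Lock users out for 10 minutes instead of the default 60 seconds
    protected $lockoutTime = 600;

    // Allow only 3 failed attempts instead of the default 5
    protected $maxLoginAttempts = 3;
}
```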
So: upgrade to Laravel 5.1 and you get free login throttling with a simple trait. Start new apps in Laravel 5.1 and they get login throttling for free out of the box. Security FTW.
I'm working on a new little micro-SaaS that is purely dependent on GitHub in order to operate, so there's no reason to set up any user flow other than just GitHub. Let's do it.
There's a little bit of knowledge about how OAuth works that'll help to get started. The general flow for an OAuth application that's authenticating the way we will be here is:
1. The user visits your site and clicks a "Log in with GitHub" link, which points at a route in your app.
2. Your app redirects the user to GitHub, passing along your application's client ID (e.g. `12345`) and a call back URL (`http://mysocialiteapplication.com/auth/github/callback`, maybe).
3. The user authorizes your application on GitHub.
4. GitHub redirects the user back to your callback URL, and your app exchanges what it receives for an access token.
5. Your app (`MySocialiteApplication.com`) uses the token to request a dump of information about the authenticating user from GitHub.

OK, so now you understand the basic flow, let's get started with Socialite.
$ composer require laravel/socialite
To hook Socialite into your Laravel application, edit `config/app.php` and add the following line to the Service Providers array:
Laravel\Socialite\SocialiteServiceProvider::class,
and add the following line to the Aliases array:
'Socialite' => Laravel\Socialite\Facades\Socialite::class,
Socialite is now hooked into your application and booting. Let's get your credentials set up.
For this particular example, we'll be using GitHub as our authentication provider. If you wanted, though, you could use one of the other options, or you could set up multiple authentication options.
First, let's go to GitHub and set up a new Application. If you're new to OAuth, you'll need to create an account and then an Application with each service provider. The application will ask you questions like "What's the name of your application", "What's the callback URL", etc.
Once you complete it, you'll usually get a client ID and a client Secret, which we need to capture in order to configure Socialite correctly.
So, visit GitHub's New Application page, fill out the form, and grab your client ID and secret.
For now, I'm setting the callback URL to be `http://mysocialiteapplication.app:8000/auth/github/callback`, but you should set it to whatever the GitHub callback URL will be on your development environment.
Note: You can create a separate Application later for the production version of the site that has the production callback URL, or you can just adjust this once you go live--but that'll mean this authentication feature won't work locally anymore.
Now, let's paste the authentication information in where Socialite can access it. This lives in `config/services.php`, and as you can see, I've chosen to reference environment variables instead of pasting the values directly in:
'github' => [
'client_id' => env('GITHUB_ID'),
'client_secret' => env('GITHUB_SECRET'),
'redirect' => env('GITHUB_URL'),
],
So, let's add those values to our `.env.example` and to our `.env` files:
GITHUB_ID=client id from github
GITHUB_SECRET=client secret from github
GITHUB_URL=http://mysocialiteapplication.app:8000/auth/github/callback
Boom. Socialite can now hit GitHub for you. Let's set up our routes and controller methods.
Add these routes to `routes.php` (you can make them anything you want, but this is the convention):
Route::get('auth/github', 'Auth\AuthController@redirectToProvider');
Route::get('auth/github/callback', 'Auth\AuthController@handleProviderCallback');
Now let's fill out those controller methods:
In `Auth\AuthController`:
/**
* Redirect the user to the GitHub authentication page.
*
* @return Response
*/
public function redirectToProvider()
{
return Socialite::driver('github')->redirect();
}
/**
* Obtain the user information from GitHub.
*
* @return Response
*/
public function handleProviderCallback()
{
try {
$user = Socialite::driver('github')->user();
} catch (\Exception $e) {
return Redirect::to('auth/github');
}
$authUser = $this->findOrCreateUser($user);
Auth::login($authUser, true);
return Redirect::to('home');
}
/**
* Return user if exists; create and return if doesn't
*
* @param $githubUser
* @return User
*/
private function findOrCreateUser($githubUser)
{
if ($authUser = User::where('github_id', $githubUser->id)->first()) {
return $authUser;
}
return User::create([
'name' => $githubUser->name,
'email' => $githubUser->email,
'github_id' => $githubUser->id,
'avatar' => $githubUser->avatar
]);
}
You can structure this code any way you want; many folks will create an authentication service and inject it in. Handle it however you want, but the above is what it should be doing in general.
First we redirect to GitHub; when the redirect comes back, we grab the relevant information, and then we either look up the user and authenticate as that user, or we create a new user and authenticate.
We need to update our `users` migration so that it will allow us to store some GitHub-specific information. There's more information that comes back from GitHub, but here's what I chose to store. Since this is a new app, I could just modify the `users` migration, but if you have an existing app, you'll need to make a new migration.
$table->increments('id');
// Cached from GitHub
$table->string('github_id')->unique();
$table->string('name');
$table->string('email');
$table->string('avatar');
$table->rememberToken();
$table->timestamps();
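If you're adding this to an existing app instead, a new migration along these lines should do it; the class name is just a suggestion:

```php
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class AddGithubFieldsToUsersTable extends Migration
{
    public function up()
    {
        Schema::table('users', function (Blueprint $table) {
            // Cached from GitHub
            $table->string('github_id')->unique();
            $table->string('avatar');
        });
    }

    public function down()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn(['github_id', 'avatar']);
        });
    }
}
```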
Finally, let's update the Eloquent User model so we can fill the new GitHub fields:
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = ['name', 'email', 'github_id', 'avatar'];
That's it. Let's see how this works now.
Set up a button on some page on your site that directs to `auth/github`. The user will be sent to GitHub, asked to give permissions to your app, and upon approval, sent back to `auth/github/callback`.
Then your callback code will run and will either log in the pre-existing user or create a new user and log in as that.
Then the user should be forwarded to `home`, and you now have a Laravel-authenticated user, with a Laravel user session and all of the functionality of the `Auth` façade and driver available to you.
You can go build a `logout` route that runs `Auth::logout` or whatever else you like; at this point, you're up and running using Socialite for GitHub authentication! Enjoy!
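That logout route can be as small as this; the path and the redirect target are just examples:

```php
// routes.php
Route::get('auth/logout', function () {
    Auth::logout();

    return Redirect::to('/');
});
```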
Quick note: if you plan to use the social services for login only, and you want to hook it into Laravel's user and authentication systems as your primary use (like we did here), there's a package for that: Adam Wathan's Eloquent OAuth L5. It takes what we did here and makes it a lot simpler. Watch this video to learn how it works.
This is the sample code for using his code:
// Redirect to Facebook for authorization
Route::get('facebook/authorize', function() {
return OAuth::authorize('facebook');
});
// Facebook redirects here after authorization
Route::get('facebook/login', function() {
// Automatically log in existing users
// or create a new user if necessary.
OAuth::login('facebook');
// Current user is now available via Auth facade
$user = Auth::user();
return Redirect::intended();
});
As you can see, some of the work we had to do in our controller code has been abstracted away.
If you're using social services on top of other authentication systems, though, or if you're using it for functional components like pulling information from Github, or if it's not hooking into Eloquent users, you'll want to stick with Socialite.
Whichever tool you use, just know that it's very, very simple to set up social logins and authentication with your Laravel applications.
It's now super easy to exclude specific routes from your CSRF middleware:
// app/Http/Middleware/VerifyCsrfToken
protected $except = [
'webhook/*'
];
Learn more about how, and why, it works at Laravel News': Excluding Routes from the CSRF Middleware.
Before I blindly assume PHPStorm is the only way to go, I wanted to see: Can I bring the things a PHP-focused IDE provides PHP developers back to Sublime Text and get the best of both worlds?
Let's start with a quick list of ways that PHPStorm really sets itself apart for me. Please note: There are a million other features that PHPStorm uniquely offers, but to be honest, it's the tiny little conveniences that I've seen provide the biggest boost in efficiency.
Also note: This is Sublime Text 3 we're talking about.
Without most of these wonderful PHP-focused features, it'll be hard to recommend using something other than PHPStorm, even if it's slower and costlier and uses more memory. So. Can we reproduce them in Sublime Text?
`use` (import) of classes

Before we talk about anything else, you at least need to know how to install packages in Sublime Text.
If you haven't yet, Go install Package Control now.
Unless otherwise specified, every package after this should be installed using Package Control.
The most significantly PHP-focused package for Sublime Text is called Sublime PHP Companion.
Like most packages, it contains a series of actions you can perform. ~~They're mapped to certain keys by default, but you can always re-map them.~~ Update: there is no keymapping by default anymore. Learn more about how to set up PHPCompanion keymapping here.
- `find_use` (`F10`) - When your cursor is over a class name, this command makes it simple to `use` (import) that class.
- `F9` - Same as `find_use` but instead of expanding the class in the import block, it expands its FQCN inline.
- `F8` - Adds the namespace for the current file based on the file's path.
- `shift+F12` - Same as Sublime Text's native goto_definition (described below), but scoped in a PHP-aware manner.

The package isn't perfect, and it is clearly not as bright as PHPStorm is when it comes to detecting namespaces and parsing some weird edge cases. But for day-to-day work, this is a huge boost in the PHP-code-knowledge area.
Sublime PHP Companion doesn't sniff your classes and give you autocompletion, sadly, but SublimeAllAutocomplete does register the names of all symbols (functions, classes, etc.) in any files you have open in other tabs and add those to the autocomplete register.
This isn't quite the same as full userland-code-sensitive autocompletion, but it helps a lot.
Sublime PHP Companion makes it easy to right click on functions and go to their definitions, but this shortcut brings back PHPStorm's CMD-click-to-definition. FYI, in Sublime Text CMD (or Windows' Ctrl key, or whatever it is on other systems) is called "Super".
First, create a user mousemap file. If you don't have one, go here:
- Linux: Create `Default (Linux).sublime-mousemap` in `~/.config/sublime-text-3/Packages/User`
- Mac: Create `Default (OSX).sublime-mousemap` in `~/Library/Application Support/Sublime Text 3/Packages/User`
- Windows: Create `Default (Windows).sublime-mousemap` in `%appdata%\Sublime Text 3\Packages\User`
[
{
"button": "button1",
"count": 1,
"modifiers": ["ctrl"],
"press_command": "drag_select",
"command": "goto_definition"
}
]
You just taught Sublime Text this: "when I hold ctrl and click button one, fire the `goto_definition` command." Done! (original source)
Note: I originally wanted to suggest using the `super` modifier, so it would be just like PHPStorm; however, that would override Sublime Text's "hold `super` and click to get multiple cursors" behavior, so I didn't.
There's a package named Sublime PHPCS that brings PHP_CodeSniffer, PHP's linter, PHP Mess Detector, and Scheck (?) to bear on your code.
You can tweak all sorts of settings, but you're primarily either going to run it every time you save your file (good, but can get annoying), or every time you trigger it from the command palette (press `super-shift-p` and then type until you get "PHP Code Sniffer: Sniff this file") or keyboard shortcut (`ctrl-super-shift-s` by default).
You'll get gutter highlights and a list up top of all of the places your code doesn't satisfy the linter.
Note that this and any other packages that rely on code sniffing and linting require command-line applications to be installed, so be sure to visit their sites and read their directions.
Interestingly, there's a relatively un-noticed plugin doing the same thing (but for PHPCS only) that's written by the same group that wrote PHP CodeSniffer, so it might be worth checking out as well; it's called PHP_CodeSniffer Sublime Text 2/3 Plugin (creative, I know.)
I've never used this one, though, so proceed with caution.
Mike Francis also shared a custom build script he wrote that runs PHP-CS-Fixer on your code whenever you trigger it. That means it'll actually enforce PSR-2 (or whatever other PHP-CS-Fixer standard you pass it) on your code for you.
Taylor Otwell actually shared this same script with me, but he didn't write it up as nicely as Mike did. :) He did, however, mention that you might want to set this preference: "show_panel_on_build": false,
This'll keep it from popping out the command panel with your results every time, which can get very irritating very quickly.
SublimeLinter PHP (and its required dependency, SublimeLinter) rely on PHP's built-in linter (just like the Sublime PHPCS plugin above). This is a simpler version that only runs the linter, nothing else.
If you're the type to use PHPStorm, there's a greater chance that you're the type to write Doc blocks. (Just sayin').
DocBlockr makes it simple to create new doc blocks, but more importantly, if you create a doc block just above a defined function, it will extract that function's parameter information and pre-fill it in your doc block. Boom.
Are you the type that hates switching from your IDE to your terminal/Git client? Sublime Text Git provides access to many Git commands directly from the Sublime Text command palette.
GitGutter shows you diff information regarding each line's status--has it been modified, inserted, or deleted?
This is not nearly as powerful as PHPStorm's Git gutters, but it's a step in the right direction.
There's a great plugin that makes it super easy to run PHPUnit from the command palette or a keyboard shortcut: SimplePHPUnit
Just like the name implies, you install the package and you're up and running.
CodeIntel is supposed to provide Sublime Text intelligence about the language you're working in. It should provide autocompletion, easy jump-to-definition, and information about the function you're currently working in.
Why do I keep saying "should" and "supposed to"? Because I have yet to meet a PHP developer who can get CodeIntel up and running consistently and predictably. Have you? Hit me up.
When I asked around on Twitter, plenty of folks shared plugins. Since I don't use these, I can only share them vaguely, but I'm sure they're all worth a quick check.
Do you miss the Xdebug integration in PHPStorm? Check out Codebug, a standalone xdebug client.
This post is not an introduction to all things Sublime Text, but I do want to cover a few important pieces here.
If you press `super-P` you'll get the wildly powerful `Goto Anything` palette, which allows you to easily find files, but you can go a bit further: if you find your file (e.g. by typing `Handler.php`), you can also trigger opening it at a certain line (`Handler.php:35`) or at a certain symbol (`Handler.php@report`).
While the `Goto Anything` palette lets you search for files in your project, the Command Palette allows you to search for commands.
This means that any command that Sublime Text lets you perform (run builds, rename files, etc.), but also those from third-party packages (Sniff this file, etc.) can be run purely from the keyboard, even if you don't know (or have) the keyboard shortcut.
If you press `super-R` you'll get the `Goto Symbol` palette, which will navigate to any symbol in your current file.
Symbols are things like classes, methods, or functions.
Many editors have added multiple cursors, but Sublime Text still does it the best.
If you've never tried it, go learn about it somewhere, but here's a quick intro:
Open up a file. Hold "super" (cmd on Mac) and click several places around the file. Now start typing. BOOM.
Another great trick: Place your cursor on a common word (for example, a variable name). Now press `Super-D` a few times. You now have several instances of that variable selected and you can manipulate them all at once.
Or, select five lines and press `Super-shift-l`. Check it.
There's a lot more you can do with this if you get creative.
Did you know that when you're using any of the command palettes in Sublime Text, you don't have to finish typing any single word?
In most editors (like PHPStorm), if you wanted to find a file named `resources/views/conferences/edit.blade.php`, you could type `resources/views/conferences/edit.blade.php` or `conferences/edit.blade.php`, but in Sublime Text all you would need is something like `resvieconedblp`. Just type enough that the order of letters you're typing could only exist in the string you're looking for, and you'll be good to go. Skip a letter here, skip a slash there--no problem.
There's a lot more to learn about how Sublime Text works, and a lot of tools and courses available to you. This is not a comprehensive resource for everything that's great about Sublime; those guides have already been written.
If you want to learn more about Sublime Text, there are two excellent resources I'd consider checking out.
Use the coupon code GEEK to get $10 off (disclaimer: it helps me out, too).

Let's take a look at our list and see what we've handled:
Not bad, actually. Let's talk about what's missing:
What's my verdict? As always, it depends. I think it'll depend some on the project, some on the developer, and some on whether or not I can find solutions to some of the issues above. But I'm definitely leaning on Sublime Text a lot more than I was six months ago—it's just so darn fast.
Are there any Sublime Text tips for PHP developers that I missed? Let me know on Twitter.
Are there any PHPStorm features that I didn't cover here that you think are vital to every developer's toolkit? Let me know that too.
Also: I couldn't've written this without Adam Wathan, Taylor Otwell, Jeffrey Way, and many, many other friends on Twitter.
Two of these traits have to do with the behavior of database migrations and database state during your testing. If you have never tested anything that relies on the database, it'll help you to know that it can be hard at times to ensure that your database gets into the right state before your tests run.
Here's how it normally goes: Before every test, you migrate your database tables. And after each test, you either wipe out the data that was added in the test, or if you're asking for pain, you don't and you hope that data doesn't break any tests that happen later in the testing process.
The DatabaseMigrations trait simplifies this process for you. Here's the source for the trait:
<?php
namespace Illuminate\Foundation\Testing;
trait DatabaseMigrations
{
/**
* @before
*/
public function runDatabaseMigrations()
{
$this->artisan('migrate');
$this->beforeApplicationDestroyed(function () {
$this->artisan('migrate:rollback');
});
}
}
As you can see, it just migrates your data on setup, and then registers a callback to roll back the migrations when the application is torn down at the end of the test.
How do you use it?
<?php
use Illuminate\Foundation\Testing\DatabaseMigrations;
class ExampleTest extends TestCase
{
use DatabaseMigrations;
//
}
That's it!
Migrating up and down for every test is a simple way to handle it, but it's hard to do that quickly unless you're working with SQLite. We do a lot of SQLite-based testing at Tighten, but it can be irritating at times to worry about some of SQLite's restrictions every time you write a migration.
If you want to test in MySQL, you might find more success wrapping each test in a transaction. It functions like this:
Every time a test is set up, it starts a database transaction. The test runs. And when the test is being torn down, the transaction is rolled back, which means the database is nice and pristine again.
Let's check out the source of the trait:
<?php
namespace Illuminate\Foundation\Testing;
trait DatabaseTransactions
{
/**
* @before
*/
public function beginDatabaseTransaction()
{
$this->app->make('db')->beginTransaction();
$this->beforeApplicationDestroyed(function () {
$this->app->make('db')->rollBack();
});
}
}
Just like above, it's running the "begin" operation, and then registering a callback to run the "stop" operation when the test is spinning down.
<?php
use Illuminate\Foundation\Testing\DatabaseTransactions;
class ExampleTest extends TestCase
{
use DatabaseTransactions;
//
}
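To make the rollback behavior concrete, here's a sketch of a test that relies on it. The Post model, its fields, and the test name are all illustrative assumptions, not from the framework:

```php
<?php

use Illuminate\Foundation\Testing\DatabaseTransactions;

class PostRollbackTest extends TestCase
{
    use DatabaseTransactions;

    public function test_created_rows_never_outlive_the_test()
    {
        // The trait opened a transaction before this method ran,
        // so this insert is invisible outside this test.
        App\Post::create(['title' => 'Temporary', 'body' => 'Gone soon']);

        $this->assertEquals(1, App\Post::count());
        // On teardown, the transaction rolls back and the
        // posts table is pristine again for the next test.
    }
}
```

Because nothing is ever committed, this stays fast even against MySQL.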
In Laravel 5, CSRF middleware changed to be enabled by default on all routes. But you don't always want to worry about CSRF in the middle of a test. Or maybe you have some other custom middleware that you don't want to run.
There's now a trait you can add to your tests to turn off that Middleware so you don't have to worry about it in your tests:
You guessed it, we're going to start by looking at the source:
<?php
namespace Illuminate\Foundation\Testing;
use Exception;
trait WithoutMiddleware
{
/**
* @before
*/
public function disableMiddlewareForAllTests()
{
if (method_exists($this, 'withoutMiddleware')) {
$this->withoutMiddleware();
} else {
throw new Exception('Unable to disable middleware. CrawlerTrait not used.');
}
}
}
We're calling the withoutMiddleware method if it exists. The method_exists check makes sure we're in a test that uses CrawlerTrait, which in practice just means we're writing integration tests that extend Laravel's TestCase. If so, it disables all middleware for these tests.
<?php
use Illuminate\Foundation\Testing\WithoutMiddleware;
class ExampleTest extends TestCase
{
use WithoutMiddleware;
//
}
Note that you can also just call $this->withoutMiddleware(); in a single test method if you only want to disable middleware within that method.
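Here's a minimal sketch of that per-method usage; the /contact route and test names are hypothetical:

```php
<?php

class ContactFormTest extends TestCase
{
    public function test_form_posts_without_a_csrf_token()
    {
        // Disable middleware (including CSRF verification)
        // for this one test method only.
        $this->withoutMiddleware();

        $this->post('/contact', ['message' => 'Hello!']);

        $this->assertResponseOk();
    }

    public function test_other_requests_still_run_middleware()
    {
        // No call to withoutMiddleware() here, so the full
        // middleware stack applies in this method as normal.
    }
}
```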
That's it. Three simple helper traits to make it even simpler to do integration testing in Laravel 5.1.
$post = new Post;
$post->title = 'Fake Blog Post Title';
$post->body = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam lorem erat, luctus at diam sed, dapibus facilisis purus. In laoreet enim nunc, ut pretium arcu scelerisque in. Nunc eu cursus nibh. Etiam pulvinar vulputate libero sed molestie. In condimentum varius faucibus. Vestibulum non blandit sapien, quis tincidunt augue. Aliquam congue sapien eget mattis sagittis.';
$post->save();
That's already a pain to write out in the middle of your test. But what about when you need to create multiples? To differentiate them? Create related items? It gets out of control.
If you've ever used TestDummy or Faktory (or factory_girl in Ruby), you're already familiar with using a "factory" to build fake entities for your testing.
If you haven't used them before, these are libraries that make it simple to create a pattern for how to generate fake entities for testing. You're telling the system: "Every time I request an entity of a given class, give me one with these properties filled out to these values."
Note: It's worth pointing out that the fields you provide by default shouldn't always be "every field available to this model", but rather those which are useful in your context. Especially in testing, there are often circumstances where you only want to set a minimum set of fields.
Want to learn more about how factories work, especially in testing? Check out Adam Wathan's article introducing model factories for a better overview about how to use them.
Let's take a look at an example. We'll use the Laravel 5.1 syntax:
// app/database/ModelFactory.php
$factory->define('App\Post', function () {
return [
'title' => 'My Awesome Post',
'body' => 'The Body'
];
});
You've just defined that, every time someone requests an entity of class App\Post (and from now on, why don't we just write App\Post::class instead of 'App\Post'?) from the factory, they will receive a Post with a title of My Awesome Post and a body of The Body. Pretty straightforward.
So, how do we request such an entity? Let's start with make():
$post = factory(App\Post::class)->make();
What do you think you're getting? An entity of class Post, with the same title and body every time.
dd($post->toArray());
/* Returns:
array:2 [
"title" => "My Awesome Post"
"body" => "The Body"
]
*/
So, what if we want to create three objects for testing?
$posts = factory(App\Post::class, 3)->make();
Done. You now get a Collection of Posts.
dd($posts->toArray());
/* Returns:
array:3 [
0 => array:2 [
"title" => "My Awesome Post"
"body" => "The Body"
]
1 => array:2 [
"title" => "My Awesome Post"
"body" => "The Body"
]
2 => array:2 [
"title" => "My Awesome Post"
"body" => "The Body"
]
]
*/
You've probably noticed that having three Posts with the same data is less useful than we might want.
Enter the Faker library. It's always been a great tool for seeding and it's at times useful in testing; it makes it simple to generate structured, fake data for your fake entities.
It's even easier now: Faker is baked into Laravel 5.1. Check it out:
$factory->define(App\Post::class, function ($faker) {
return [
'title' => $faker->sentence,
'body' => $faker->paragraph
];
});
Faking, built right in. Now every entity you create with the factory will have a unique title and body:
$posts = factory(App\Post::class, 3)->make();
dd($posts->toArray());
/* Returns:
array:3 [
0 => array:2 [
"title" => "Ea quis animi ex eius in aut."
"body" => "Animi velit rerum corrupti quod nam consequuntur. Eius mollitia ut voluptatum laborum quod ex est. Id et aut aut molestias distinctio illo."
]
1 => array:2 [
"title" => "Illo quod doloribus placeat."
"body" => "Ea dolorem eligendi modi sit. Facilis incidunt et sequi velit quia. Ab ipsa dicta dolor doloribus."
]
2 => array:2 [
"title" => "Quod qui ea et quo."
"body" => "Iure atque vel rerum perspiciatis voluptatem eligendi provident molestiae. Porro aut est accusamus aut. Tempora quisquam ea delectus nihil hic quidem alias velit. Necessitatibus et illum quo culpa ad sint."
]
]
*/
Note: Faker is easily injectable by default, but if you want to use it, you still need to include it in your composer.json as fzaninotto/faker. Thanks to Eric Barnes for the tip!
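For reference, adding Faker to your dev dependencies looks something like this (the version constraint is illustrative):

```json
{
    "require-dev": {
        "fzaninotto/faker": "~1.4"
    }
}
```

Run composer update after adding it.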
Since we'll be using these model factories often in integration testing (or in testing database-backed chunks of the application), we're going to need to talk about how to persist these fake entities to the database. Thankfully, it's simple:
factory(App\Post::class, 20)->create();
That's it! Now you have 20 fake Posts in your database.
First, these work great in your database seeders. Just truncate the database, run factory(ClassName::class, 20)->create();, and you're seeded and ready to go.
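In a seeder, that might look something like this sketch; the PostTableSeeder class name and posts table are assumptions for the example:

```php
<?php

use Illuminate\Database\Seeder;

class PostTableSeeder extends Seeder
{
    public function run()
    {
        // Clear out any existing rows, then build 20 fake Posts
        // using the factory definition from above.
        DB::table('posts')->truncate();

        factory(App\Post::class, 20)->create();
    }
}
```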
But let's take a look at a testing example.
<?php
...
class PostRepositoryTest extends TestCase
{
public function test_it_paginates()
{
factory(App\Post::class, 50)->create();
$thisPage = (new App\PostRepository)->paginate();
$this->assertEquals(20, $thisPage->count());
}
}
I wrote earlier that these factories are good for integration testing, and they are; but as you can see above, they're also good for testing any smaller pieces (unit or functional) that access the database.
We inserted more than our pagination count, ran the paginate() method, and tested the result. This test could only function properly with real data in a database, unless you're interested in mocking your entire Eloquent database access layer, which I wouldn't recommend.
Here's a simple integration testing example:
<?php
...
class PostListPageTest extends TestCase
{
public function test_list_page_paginates()
{
factory(App\Post::class, 50)->create();
$this->visit('/posts')
->see('Next Page');
}
}
We are only expecting the "Next Page" button to show up if there are more than our pagination count (20), so we added 50 and checked to see that the posts page shows the "Next Page" button.
Another option is to actually look at our individual items:
<?php
...
class PostListPageTest extends TestCase
{
public function test_list_page_shows_titles()
{
$post = factory(App\Post::class)->create();
$this->visit('/posts')
->see($post->title);
}
}
What if you're working with a particular model, and you want to set a particular value on it for testing purposes?
Just pass the override parameters as an array to the make/create method:
$post = factory(App\Post::class)->make([
'title' => 'THE GREATEST POST',
]);
What if you need to generate two different sorts of Post to test a particular condition against others?
Try defineAs, which allows you to specify a 'type' when you're generating fake entities. You can either create the two entirely separately:
$factory->defineAs(App\Post::class, 'short-post', function ($faker) {
return [
'title' => $faker->sentence,
'body' => $faker->paragraph
];
});
$factory->defineAs(App\Post::class, 'long-post', function ($faker) {
return [
'title' => $faker->sentence,
'body' => implode("\n\n", $faker->paragraphs(10))
];
});
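Once a named type is defined, you request it by passing the type name as the second argument to factory():

```php
// Build a single long-form post from the 'long-post' definition
$longPost = factory(App\Post::class, 'long-post')->make();

// Or persist ten of them at once
$longPosts = factory(App\Post::class, 'long-post', 10)->create();
```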
Or, you can extend the base type with a customized type:
$factory->define(App\Post::class, function ($faker) {
return [
'title' => $faker->sentence,
'body' => $faker->paragraph
];
});
$factory->defineAs(App\Post::class, 'long-post', function ($faker) use ($factory) {
$post = $factory->raw('App\Post');
return array_merge($post, ['body' => implode("\n\n", $faker->paragraphs(5))]);
});
Leverage the magic of Illuminate Collections to quickly set up your fake entities and their relationships:
$posts = factory('App\Post', 3)
->create()
->each(function($post) {
$post->relatedItems()->save(factory('App\Item')->make());
});
Model factories have long been a powerful tool to aid testing and seeding. Having them built-in to the framework is now one less step in the way of us testing well and consistently.
Check back later this week for even more 5.1 testing goodies.
Note: If you do a lot of testing, it's likely you already have Faker and Mockery installed on every site you run, but you just got saved a step: they're now installed by default.
For a quick refresher, integration tests are those which test your entire system as an integrated application, as compared against unit tests, which test each system of your application separately.
Usually integration tests pass in input to your application (often just an instruction like "visit this page") and check the output (often "I should see this text somewhere on the page"), with no concern of how that input was converted to that output. Integration tests see the actual processes running your application as a black box. Don't know, don't care.
Jeffrey Way's fantastic Integrated package has given integration tests in Laravel superpowers for a while now, and it's now a part of the Laravel core.
This means that any test extending TestCase provides a simple, fluent interface for you to operate what amounts to almost a fake web browser that can check your output. If you've ever written Selenium-based tests, think that but simpler and easier to set up.
I'll show a few examples here, but note that the full documentation is available at the docs.
visit() and see()
Check this out:
<?php
...
class HomePageTest extends TestCase
{
public function test_home_page_says_wowee()
{
$this->visit('/')
->see('Wowee');
}
}
In two lines of code, we just tested that a user who visits the home page of the application sees the phrase "Wowee" somewhere on the site. Two lines of code. If you have any imagination, you can see how far we can take this, with almost no work, to ensure that the front end of our application functions properly, not just its guts.
seePageIs()
public function test_forwarder_forwards_the_page()
{
$this->visit('/forwarder')
->seePageIs('forwarded-to');
}
click()
public function test_cta_link_functions()
{
$this->visit('/sales-page')
->click('Try it now!')
->see('Sign up for trial')
->onPage('trial-signup');
}
type(), select(), check(), attach(), and press()
There are many more interactions you can script with the new functionality.
For example, you can fill out a form and submit it:
public function test_it_can_subscribe_to_newsletter()
{
$this->visit('/newsletter')
->type('me@me.com', '#newsletter-email')
->press('Sign Up')
->see('Thanks for signing up!')
->onPage('newsletter/thanks');
}
Note that press() can either be passed the value of the button (press("Sign Up")) or the name (press("sign-up-button")).
You can also fill out other fields:
public function test_signups_can_complete()
{
$this->visit('/signup')
->type('Matt Stauffer', 'name')
->check('overTwentyOne')
->select('Florida', 'state')
->attach('../uploads/test.jpg', 'profilePicture')
->press('Sign Up')
->seePageIs('/signup/thanks');
}
submitForm()
public function test_login_form()
{
$this->visit('/login')
->submitForm('Log In', ['email' => 'me@me.com', 'password' => 'secret'])
->see('Welcome!')
->onPage('dashboard');
}
seeInDatabase()
public function test_saves_newsletter_signups()
{
$this->visit('/newsletter-signup')
->type('me@me.com', 'email') // type() also needs the field name; 'email' here is illustrative
->press('Sign up')
->seeInDatabase('signups', ['email' => 'me@me.com']);
}
Cool trick: you can tell Elixir to run your tests every time you modify a file by adding the following code to your gulpfile: mix.phpUnit(). Now, just run gulp tdd from the command line and it will re-run PHPUnit every time you change any of your files and notify you of the results.
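For reference, the Elixir setup is just a regular gulpfile; a minimal sketch looks like this:

```js
// gulpfile.js
var elixir = require('laravel-elixir');

elixir(function (mix) {
    // Registers the phpunit task, which `gulp tdd`
    // re-runs whenever a watched file changes.
    mix.phpUnit();
});
```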
Thanks to Jeffrey Way for pointing this out—I had no idea it existed and now I'm in love with it.
As you can see, bringing in Integrated has laid the foundation for simple and powerful integration tests with almost no work. I have to make this clear: I've written integration tests before and it's never been this easy to get started.
We're not done yet—check back later this week to learn more about all the amazing things you can do with integration testing in Laravel 5.1.
Today, let's take a look at the options Artisan commands present for input and output. Most of this is review; to get to what's new in 5.1, go to advanced output.
Note: Artisan commands build on top of the Symfony Console Component, so if you really want to geek out, you can go learn more there.
As a quick reminder, here's what a signature definition looks like in a 5.1 command object:
protected $signature = 'command:name
{argument}
{optionalArgument?}
{argumentWithDefault=default}
{--booleanOption}
{--optionWithValue=}
{--optionWithValueAndDefault=default}
';
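To make the mapping concrete, here's a hypothetical invocation of that signature and the values each accessor would return inside handle():

```php
// Invoked as:
//   php artisan command:name foo --booleanOption --optionWithValue=bar

$this->argument('argument');                 // "foo"
$this->argument('optionalArgument');         // null (not supplied)
$this->argument('argumentWithDefault');      // "default"
$this->option('booleanOption');              // true
$this->option('optionWithValue');            // "bar"
$this->option('optionWithValueAndDefault');  // "default"
```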
Here are the methods that you can use within your command's handle() method to get and display data:
$this->argument('argumentName')
The argument method allows you to get the value of an argument you defined in your parameter list.
So, if your signature definition is do:thing {awesome} and the user runs php artisan do:thing fantastic, $this->argument('awesome') will return fantastic.
Note that if you're accessing an argument that's not required, you haven't set a default, and the user doesn't fill anything out, this will come back as null.
$this->option('optionName')
Just like the argument method, the option method gets the value of an option.
If the option is defined as a boolean option (--queued, --doItAwesomely), this will return true if passed or false if not.
So, if your signature definition is go {--boldly} and the user runs php artisan go --boldly, $this->option('boldly') will return true.
$this->argument() and $this->option()
If you don't pass parameters to the argument and option methods, they'll each return an array of all of their defined parameters and their values.
So, if your signature definition is jump:on {thing1} {thing2} and your user runs php artisan jump:on rock boulder, $this->argument() will return this array:
[
'command' => 'jump:on',
'thing1' => 'rock',
'thing2' => 'boulder',
]
Same thing for options.
When you're writing your handle() method, it's common to want to send output to the end user. There are quite a few options for this.
All four of the simple output methods ($this->info(), $this->comment(), $this->question(), and $this->error()) allow you to pass any string to the user, as a notification:
$this->info('Finished syncing data');
Check out all the colors:
What if you want to write a script that allows you to have various stages of information retrieval from the user? Or conditional information retrieval depending on previous responses, or depending on part of the operations of the command?
$this->ask()
Throw a question out to the user and get their response:
public function handle()
{
$name = $this->ask('What is your name?');
$this->info("Hello, $name");
}
$this->secret()
Secret is the same as ask, but with hidden typing:
public function handle()
{
$password = $this->secret('What is your password?');
$this->info("This is really secure. Your password is $password");
}
$this->confirm()
What if you just need a yes/no?
public function handle()
{
if ($this->confirm('Do you want a present?')) {
$this->info("I'll never give you up.");
}
}
$this->anticipate() and $this->choice()
What if you need custom choices? The Anticipate method allows you to provide autocompletion (but leaves the response free to be whatever the user wants), and the Choice method forces a choice between provided options.
public function handle()
{
$name = $this->anticipate(
'What is your name?',
['Jim', 'Conchita']
);
$this->info("Your name is $name");
$source = $this->choice(
'Which source would you like to use?',
['master', 'develop']
);
$this->info("Source chosen is $source");
}
And the result:
Laravel 5.1 introduces two new advanced output forms: table and progress bar.
$this->table()
The table method accepts two parameters: headers and data.
So, let's start with some hand-crafted goodness:
public function handle()
{
$headers = ['Name', 'Awesomeness Level'];
$data = [
[
'name' => 'Jim',
'awesomeness_level' => 'Meh',
],
[
'name' => 'Conchita',
'awesomeness_level' => 'Fabulous',
],
];
/* Note: the following would work as well:
$data = [
['Jim', 'Meh'],
['Conchita', 'Fabulous']
];
*/
$this->table($headers, $data);
}
Here's the output:
And as you can see in the docs, this is a great tool for easily exporting data with Eloquent:
public function handle()
{
$headers = ['Name', 'Email'];
$users = App\User::all(['name', 'email'])->toArray();
$this->table($headers, $users);
}
This is built on the Symfony Table Helper.
It might seem like magic, but outputting progress bars is actually really simple using the Symfony Progress Bar Component:
public function handle()
{
$this->output->progressStart(10);
for ($i = 0; $i < 10; $i++) {
sleep(1);
$this->output->progressAdvance();
}
$this->output->progressFinish();
}
This yields this beauty:
Let's break it down. First, we notify the progress bar how many "units" we'll be working through:
$this->output->progressStart($numUnits);
Then, every time we process a unit, we advance the progress bar by one:
$this->output->progressAdvance();
Finally, we mark it as complete:
$this->output->progressFinish();
Note that this syntax is a wrapper around the Symfony Progress Bar component. You can take a look there for more information about how it functions.
That's it. You're now a professional Artisan input/output coordinator. Put that on your resumé/CV.
If you run php artisan from the command line in any Laravel application directory, you'll see a list of all of the Artisan commands available in that app. As you can see, Laravel comes with quite a few enabled out of the box.
While Artisan commands are powerful and refreshingly encapsulated, prior to Laravel 5.1 they were a bit of a hassle to define. Let's look at the old way, and then check out the new.
For these examples, we'll be using a Symposium command that syncs down all of the conferences available on the Joind.in API.
You generate a new Artisan command using the following (Artisan!) command:
$ php artisan make:console SyncJoindInEvents
In Laravel 5 and earlier, that would pump out this boilerplate:
<?php namespace Symposium\Console\Commands;
use Illuminate\Console\Command;
use Symfony\Component\Console\Input\InputOption;
use Symfony\Component\Console\Input\InputArgument;
class SyncJoindInEvents extends Command {
/**
* The console command name.
*
* @var string
*/
protected $name = 'command:name';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Command description.';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function fire()
{
//
}
/**
* Get the console command arguments.
*
* @return array
*/
protected function getArguments()
{
return [
['example', InputArgument::REQUIRED, 'An example argument.'],
];
}
/**
* Get the console command options.
*
* @return array
*/
protected function getOptions()
{
return [
['example', null, InputOption::VALUE_OPTIONAL, 'An example option.', null],
];
}
}
As you can see, defining arguments and options uses a complicated syntax that leaves you needing to reference the docs every step of the way. We'll end up with this for now:
<?php namespace Symposium\Console\Commands;
use Illuminate\Console\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symposium\JoindIn\Client;
class SyncJoindInEvents extends Command
{
protected $name = 'joindin:sync';
protected $description = 'Sync down Joind.in events.';
protected $client;
public function __construct()
{
parent::__construct();
$this->client = Client::factory();
}
public function fire()
{
if ($eventId = $this->argument('eventId')) {
$this->info("Syncing event $eventId");
return $this->client->syncEvent($eventId);
}
$this->info("Syncing all events");
return $this->client->syncAllEvents();
}
protected function getArguments()
{
return [
['eventId', InputArgument::OPTIONAL, '(optional) Joind.In event ID'],
];
}
}
Let's create this same command in Laravel 5.1. Here's our boilerplate:
<?php
namespace Symposium\Console\Commands;
use Illuminate\Console\Command;
class SyncJoindInEvents extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'command:name';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Command description.';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
//
}
}
First of all, take a look at that gorgeous PSR-2 formatting. Breathe it in.
Second, look how much simpler the boilerplate is. But, Matt, how do we customize the arguments and options?
You'll notice that $name has been replaced with $signature, described as "the name and signature of the console command." This particular property of the command is where we define our arguments and options. So, here's our same command in Laravel 5.1:
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
use Symposium\JoindIn\Client;
class SyncJoindInEvents extends Command
{
protected $signature = 'joindin:sync {eventId?}';
protected $description = 'Sync down Joind.in events.';
protected $client;
public function __construct()
{
parent::__construct();
$this->client = Client::factory();
}
public function handle()
{
if ($eventId = $this->argument('eventId')) {
$this->info("Syncing event $eventId");
return $this->client->syncEvent($eventId);
}
$this->info("Syncing all events");
return $this->client->syncAllEvents();
}
}
So, what else can you do? Check the docs for everything you want to know, but here are some goodies:
joindin:sync {eventId} (required argument)
joindin:sync {eventId?} (optional argument)
joindin:sync {eventId=all} (argument with a default value)
joindin:sync --wipeOldEvents (boolean option)
joindin:sync --afterDate= (option expecting a value)
joindin:sync --afterDate=1999-01-01 (option with a default value)
Note that you can also add descriptions inline:
protected $signature = 'joindin:sync
{eventId? : (optional) The ID of the event to sync}
{--wipeOldEvents : Whether to replace all locally-stored events with API results}';
There's plenty more you can do, most of which you are likely familiar with: $this->argument() or $this->option() to get the data out; $this->ask(), $this->secret(), $this->confirm(), $this->anticipate(), and $this->choice() to prompt users; and $this->info() and $this->error() to output data.
There are also two new output functions: $this->table() and the $this->output->progress* methods, which I'll cover tomorrow.
That's it! We can now create Artisan commands with ease, without needing the Artisan docs open every time we write out the argument and option syntax. Go forth and create!
This is no longer the case. Middleware can now take parameters.
(and there was much rejoicing)
Remember, middleware is like a decorator that goes around your entire application request. It takes in a request, does some work, and spits out a response. And usually, it does that work consistently across every section of your application.
But what if you want to be able to customize exactly how the middleware is being processed for a given route, without creating a new middleware for every place it's customized?
Let's consider the most common example: Scoping authentication middleware based on roles. You want to, in the route definition, choose how the authentication middleware runs, by passing it a "role" parameter that defines which user role is required in order to access this route.
When you're adding middleware to a route definition, you'd normally set it like this:
Route::get('company', ['middleware' => 'auth', function () {
return view('company.admin');
}]);
So, let's add in our parameter to show that the user must have the owner role:
Route::get('company', ['middleware' => 'auth:owner', function () {
return view('company.admin');
}]);
Note that you can also pass multiple parameters as a comma-separated list:
Route::get('company', ['middleware' => 'auth:owner,view', function () {
return view('company.admin');
}]);
So, how do we update our middleware to teach it to take parameters?
<?php
namespace App\Http\Middleware;
use Closure;
class Authentication
{
public function handle($request, Closure $next, $role)
{
if (auth()->check() && auth()->user()->hasRole($role)) {
return $next($request);
}
return redirect('login');
}
}
Note that the handle() method, which usually only takes a $request and a $next closure, has a third parameter, which is our middleware parameter. If you passed in multiple parameters to your middleware call in the route definition, just add more parameters to your handle() method:
public function handle($request, Closure $next, $role, $action)
NOTE: If you've never used middleware before, you need to ensure that this middleware is registered in the HTTP Kernel as a routeMiddleware; there's no way to pass parameters to a universal middleware.
That's it! There's now no reason to use filters at all. I for one welcome our middleware overlords.
// Blade template
@inject('service', 'App\Services\Service')
{{ $service->getSomething() }}
As you can see, the first parameter is the variable name, and the second parameter is the class or interface name or alias.
NOTE: You don't want to abuse this. There is such a thing as too much logic in your views, and it's a beast. But there are some circumstances in which instantiating a class in every controller just to pass them to the same view is a little bit too much work. Think about a context in which you might want a View Composer, but you only need a single class and binding a full View Composer might seem like too much work.
So, before you might've done this:
// DashboardController
public function index()
{
return view('dashboard')
->with('analytics', App::make('App\Services\Analytics'));
}
// dashboard.blade.php
// Template content...
@include('user.partials.finances-graph', ['analytics' => $analytics])
// Template content...
// UserController
public function showFinances()
{
return view('user.finances')
->with('analytics', App::make('App\Services\Analytics'));
}
// user/finances.blade.php
// Template content...
@include('user.partials.finances-graph', ['analytics' => $analytics])
// Template content...
// user/partials/finances-graph.blade.php
<h3>Finances</h3>
<div class="finances-display">
{{ $analytics->getBalance() }} / {{ $analytics->getBudget() }}
</div>
As you can see, we have two different controllers, pulling in two different templates, but those templates are both including the same partial template that needs the statistics service.
Let's rework this. Since it's just a single service, we'll inject the service into our template instead of creating a View Composer:
// DashboardController
public function index()
{
return view('dashboard');
}
// dashboard.blade.php
// Template Content...
@include('user.partials.finances-graph')
// Template Content...
// UserController
public function showFinances()
{
return view('user.finances');
}
// user/finances.blade.php
// Template Content...
@include('user.partials.finances-graph')
// Template Content...
// user/partials/finances-graph.blade.php
@inject('analytics', 'App\Services\Analytics')
<h3>Finances</h3>
<div class="finances-display">
{{ $analytics->getBalance() }} / {{ $analytics->getBudget() }}
</div>
That's it! You're now a Service Injection professional.
In this post, we'll be creating a series of welcome emails for every new subscriber. Before we get to a full drip campaign, let's look at the simple "thank you message" each list can have by default.
Log into your Sendy account, and visit your brand page by clicking the brand title.
Click on "View all lists" in the left rail, and choose the "Edit" icon for the list you want.
The first big chunk you'll see is, on the left, the settings for Single vs. Double Opt-in (keep it at single for now) and the subscribe success page, and on the right, the "thank you" email.
As you can see, this is just like any other email you create in the app. Set a subject, build a body (or paste in HTML), check the "Send user a thank you email after they subscribe through the subscribe form or API?" checkbox at the top, and it'll automatically be sent upon signup.
If you're using the Double Opt-in option, you'll need a confirmation email with a link for them to click.
Thankfully, Sendy has a default message that it'll send for you; however, if you want to customize the confirmation message, you can edit it below the "thank you" message section. Be sure to use the [confirmation_link]
placeholder somewhere in your custom email.
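For example, a minimal custom confirmation body might look like this (the [confirmation_link] tag is the only required piece; the rest of the copy is just a suggestion):

```
Hi there!

Thanks for subscribing. Please confirm your subscription by clicking the link below:

[confirmation_link]

If you didn't sign up, you can safely ignore this email.
```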
A "drip campaign" is any sort of schedule of emails that are sent, not at the same time to every user, but triggered based on an event. So where a normal newsletter email is sent on June 5, 2015 for every subscriber, a drip campaign might send an email on "the first day after signup" for each unique user.
Usually a drip campaign will happen in response to one of two events: Either upon first signup (which we're talking about here), or after buying a paid training course. Either way, the trigger moment kicks off a series of emails spaced out a certain amount of time after the triggering event.
Drip campaigns (or even a single welcome email that's not an immediate "thank you") are achieved in Sendy via autoresponders.
Go back to the "View all lists" page, and click the name of your list. In the upper right hand corner, you'll see the text "Autoresponders". Click that and you'll be able to set up timed emails.
As you can see, you can send off autoresponders based on custom fields, but right now we'll just choose "Drip campaign," which triggers a drip campaign based on the user's signup date. Give your first autoresponder message a name, and then you'll find yourself on the screen for the email editor.
Once you've gotten this far, you see we're back at our normal email editor, with one unique piece: The ability to set when the email is sent out:
From here, you can just create as many emails as you want in your drip campaign, and space them out from the initial signup date.
Sendy considers a single "drip campaign" autoresponder as capable of having multiple emails, so you won't be creating a new autoresponder each time, but rather adding multiple emails to the existing autoresponder:
That's it! You now have the power to create unique, custom drip campaigns for your newsletter to help onboard your new newsletter subscribers.
So, time to get my newsletter-sending setup up and running.
If I reach the number of subscribers I hope to reach, Campaign Monitor and Mailchimp will get far too expensive, far too fast, for something I'm just doing on the side.
So, I asked around on Twitter, and quite a few people recommended Sendy, which is a self-hosted PHP application that costs $59 (one-time) and then sends using Amazon SES, which is extremely affordable.
Please note: The link to Sendy above is an affiliate link, which means I'll make a commission if you sign up. This is not a paid review, and I planned this blog post before I knew they had an affiliate program; but if you're already going to consider signing up, I won't complain if Sendy sends me a commission. :)
So, of course, being the dork I am, I figured I'd install Sendy on Laravel Forge and then write a blog post about how to do it. Many thanks to Eric Barnes, who had walked this road before me, and Chris Fidao, who's just been helpful in general.
Note: Are you unfamiliar with Laravel Forge? Check out my Laravel Forge series.
Your first step is to buy Sendy. You'll receive a Zip file download, and a license key. Save the license key somewhere safe, and unzip the file locally.
We'll be using Git to deploy the site (I originally wrote directions using SCP and, trust me, it's not worth it), so initialize a new git repository in the unzipped sendy
folder and add everything to it.
Also, add a new, blank file to the uploads
directory named .gitkeep
and add that to Git too. That'll ensure the (currently empty) uploads directory gets uploaded to Forge.
$ cd Downloads/sendy
$ touch uploads/.gitkeep
$ git init
$ git add .
$ git commit -m "Initial commit."
$ git remote add origin {remote-repository-URL}
$ git push origin master
Note: If you don't have a paid Github account, I'd recommend checking out Bitbucket, as they provide free private repos.
You'll need to set up a Forge site for your Sendy server.
Let's assume you're using sendy.mattstauffer.co
(be sure to pick a domain/subdomain that will work for all of your projects--this one domain is going to be used for every Sendy "brand" you set up):
Go add a site to one of your Forge servers (if you don't have one yet, create your first server with Forge), with the Root Domain set to your domain (e.g. sendy.mattstauffer.co
) and the Web Directory empty (not the default /public
).
Now, you can just hook the site up to your Git repository.
Hit the "Edit Deploy Script" button to tweak your deploy script. Set it to the following:
$ cd /home/forge/sendy.mattstauffer.co
$ git pull origin master
(Of course, replace my site domain with yours).
Since this project doesn't use Laravel or Composer, we're just dropping the lines from the default deploy script that run the Composer install and Laravel migrations.
Now, turn on auto-deploy (if you're using Bitbucket, set up the Deployment Trigger URL; if you're using Github, just hit "Enable Quick Deploy") and hit the manual Deploy Now button to get the code up on the server.
nginx.conf
Because Sendy was designed for Apache, we have to do a bit of custom tweaking for Nginx. This script comes from Eric Barnes, who scraped it from the Internet somewhere.
When you're managing your site in Forge, there's an icon in the lower right hand corner of the screen that looks like a pen and paper; tap it, and choose "Edit Nginx Configuration."
Paste in Eric's script, and replace site.com
with your URL--for example, sendy.mattstauffer.co
.
SSH into your Forge server, cd
to your Sendy directory, and chmod 777
your uploads
folder:
$ ssh forge@my-forge-server-public-id
$ cd sendy.mattstauffer.co
$ chmod 777 uploads/
Copy your Forge server's Public IP address. Now, visit your DNS provider and add an A record for the domain or subdomain, pointed at the Forge server's Public IP address.
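The exact interface varies by DNS provider, but the record you're creating is equivalent to a zone-file entry like this (the hostname and IP here are placeholders--use your subdomain and your Forge server's actual Public IP):

```
sendy.mattstauffer.co.    3600    IN    A    203.0.113.10
```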
Visit Sendy's Get Started Guide and follow the directions.
Remember to add a database to your Forge server for the Sendy install to use.
Also note that the AWS/SES setup for Sendy is a bit complicated; if you have a lot of trouble with it, hit me up on Twitter and I might write a separate blog post on that.
Manage the server (not the site) for your Forge install and go to the Scheduler tab.
Add two cron jobs, one for the scheduler and one for autoresponders.
*/5 * * * * php /home/forge/sendy.mattstauffer.co/scheduled.php
* * * * * php /home/forge/sendy.mattstauffer.co/autoresponders.php
The autoresponder should run every minute, and there's already a default frequency for that.
Note that the scheduler should run every 5 minutes, so you'll want to choose the "custom" frequency and set it to */5 * * * *.
Once you add them both, it should look like this:
Sendy doesn't come with any HTML templates out of the box, so creating your content is going to be a little more work than you may be used to.
Sendy does have a WYSIWYG editor inline, so you can create very simple newsletters easily, but if you want to go a little more complicated, you'll have to do some work on your own.
I'll be writing an article soon about how I create mine, but for a quick introduction, check out Eric Barnes' writeup on how he creates the Laravel-News newsletter.
"You're telling us all about how to create an email newsletter and you haven't pitched it yet?"
Well, boy howdy, you're right!
Friend, you should sign up for my new newsletter! I'll share my best thoughts and advice about how you can uniquely do good--wherever you are, whether you're a designer, a developer, a project manager, entrepreneur, or whatever else. I want to help you do what you do, whatever it is, the best you can.
That's it! You now have a fully-functional install of Sendy running on your Laravel Forge server.
From this point forward, it's just Sendy-as-usual: create a list and get people signed up, create a campaign, test send, send, profit.
That's it for this post--enjoy! Check back soon (or, sign up for my newsletter!) and I'll be posting more soon about how to tweak Sendy and how to create your own templating system with Laravel.
Disclaimer: If you do this, you're going to be installing an under-development, almost-guaranteed-to-have-bugs version of Laravel. Things will break, and you can't expect any customer support until it's released.
The simplest option is just to use Composer to install a fresh app from the develop branch:
$ composer create-project laravel/laravel your-project-name-here dev-develop
If you need to upgrade, it's a little more complex, because you might find dependency issues, and you won't have the upgrade guide to show you what changes you need to make to your files. But if you're willing to risk it:
Change the minimum-stability property in your composer.json to be dev. If it doesn't exist, just add it to the bottom of composer.json ("minimum-stability": "dev").
Update your laravel/framework version to be the version you want to use. So, if you're running 5.0 and want to try out 5.1, update it to be: "laravel/framework": "5.1.*"
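Putting those two changes together, the relevant parts of your composer.json would end up looking something like this (other entries omitted for brevity):

```json
{
    "require": {
        "laravel/framework": "5.1.*"
    },
    "minimum-stability": "dev"
}
```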
That's it! Enjoy!
Important Note: Many SSL certificate providers now generate certs which work on both the www. and the non-www. version of your domain. If you have a provider that does this (RapidSSL does, and I've heard that Comodo does as well), follow these instructions, but instead of buying two certs, just use the same Forge cert ID in both locations.
I ran into an issue this week that ended up with some visitors to Karani seeing security errors in their browsers when they visited via a particular URL. Not good!
In Forge, if you set up karaniapp.com
as a site, www.karaniapp.com
will forward there. But if you buy a non-wildcard SSL cert for karaniapp.com
, it won't work for www.karaniapp.com
, so if someone types https://www.karaniapp.com/
, it'll give a security error.
The fix? Add an SSL cert for www.karaniapp.com
too.
Just like normal, generate a CSR in Forge, order a cert for www.karaniapp.com, and install it, but don't activate it (because if you activated this new SSL cert, that would deactivate your primary SSL cert for karaniapp.com).
Instead, ssh into your server. sudo vim /etc/nginx/sites-available/www.karaniapp.com
(or whichever domain you're adding the non-primary SSL cert to). What we're doing here is using vim
(a command line editor; you can use pico or emacs or whatever else) to edit the Nginx configuration file for this site.
By default you'll just see the non-HTTPS config for a site redirect:
server {
listen 80;
server_name www.karaniapp.com;
return 301 $scheme://karaniapp.com$request_uri;
}
You'll want to add the HTTPS redirect config in here, just below the closing brace, manually.
server {
listen 80;
server_name www.karaniapp.com;
return 301 $scheme://karaniapp.com$request_uri;
}
server {
listen 443 ssl;
server_name www.karaniapp.com;
# FORGE SSL (DO NOT REMOVE!)
ssl on;
ssl_certificate /etc/nginx/ssl/karaniapp.com/12345/server.crt;
ssl_certificate_key /etc/nginx/ssl/karaniapp.com/12345/server.key;
return 301 $scheme://karaniapp.com$request_uri;
}
Notice that there's a number (12345 in this example) in the middle of the ssl_certificate and ssl_certificate_key paths. Where do you get the number from?
Log into Forge, edit your site, click the SSL Certificates tab, and scroll down to the bottom. Find the Cert Path for your non-primary SSL cert and grab the number from there.
Save that file and restart Nginx. You can either sudo service nginx restart
from the command line, or visit the server in Forge, and click the refresh icon, and choose "restart Nginx".
That's it!
TL;DR Make your API SSL/HTTPS and you're good to go.
Tighten built an app for a client recently using an Angular frontend and a Laravel API backend. They were running on separate domains, and worked fine for us. But when the client tried to use them, they could list and show their content but not create, edit, or delete it.
We originally thought the issue was CORS, so we wasted far too much time on that. See the Postscript about CORS to learn more about that.
I was finally able to get on a screenshare with the client, so I had them open the Chrome Web Inspector. We saw that the XHR (AJAX) requests using the GET and OPTIONS methods were working fine, but PUT (create), PATCH (edit), and DELETE (delete) requests were failing. They were showing up in Chrome as red with an error saying net::ERR_EMPTY_RESPONSE, which means there was nothing coming back. No error message, no status code, nothing.
I downloaded the HAR of each to make sure there was nothing different in our OPTIONS
, diffed them in Kaleidoscope, and found there was nothing of significance. The browser sent an OPTIONS
request, the server sent back a reply saying it was OK to do all of those things, the browser sent a PATCH
/PUT
/DELETE
request, and then the response on my machine (not behind firewall) was fine and the response on the client's (behind firewall) was completely empty:
"response": {
"status": 0,
"statusText": "",
"httpVersion": "unknown",
"headers": [],
"cookies": [],
"content": {
"size": 0,
"mimeType": "x-unknown"
},
"redirectURL": "",
"headersSize": -1,
"bodySize": -1,
"_transferSize": 0,
"_error": "net::ERR_EMPTY_RESPONSE"
},
You don't get any more nothing than that. So we began to suspect that this client's server was disallowing PUT/PATCH/DELETE requests, since they're a tiny bit more advanced and less common.
Since Laravel's request and response objects are extensions of Symfony's, I could take advantage of Symfony's X-HTTP-Method-Override
header. If you use Laravel or Symfony, you might be familiar with how this works on a web form: You add a hidden field named _method
with a value of PUT
or PATCH
or DELETE
, submit the form via POST, and then Symfony/Laravel treat it as a PUT
/PATCH
/DELETE
request.
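For reference, here's a sketch of what that form-based approach looks like in plain HTML (in a real Laravel app, a CSRF token field would accompany it):

```html
<form action="/servers/1" method="POST">
    <!-- Laravel/Symfony read this hidden field and treat the POST as a DELETE -->
    <input type="hidden" name="_method" value="DELETE">
    <button type="submit">Delete server</button>
</form>
```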
If you're making a request that's not a form--for example, an AJAX request to an API--you want to do things a little differently. You want to add a header named X-HTTP-Method-Override with the value of your desired method.
So, we went into Angular and changed our requests from this:
factory.patch = function(form) {
return $http({
url: AppSettings.base + AppSettings.dataType.all + '/' + form.id,
data: form,
method: "PATCH"
});
}
to this:
factory.patch = function(form) {
return $http({
url: AppSettings.base + AppSettings.dataType.all + '/' + form.id,
data: form,
method: "POST",
headers: {
"X-HTTP-Method-Override": "PATCH"
}
});
}
At that point, we were now sending POST
requests, which means they were able to make it past the client's PUT
/PATCH
-stripping proxy.
I asked on Twitter the next morning, "Has anyone ever heard of a corporate web proxy that disallows PUT and PATCH requests? Think we might be running into one but not sure." I received quite a bit of affirmative support--this is, indeed, a thing that happens often.
https://twitter.com/stauffermatt/status/586510133296934913
Several folks pointed out that, if you use HTTPS, the proxy can't even see the method, so it can't reject certain methods. (StackOverflow)
All APIs should be HTTPS anyway, which is why I'd never run into this before. The only reason the site we were working on wasn't HTTPS was that it was a staging server. I've now learned my lesson: even our staging servers are getting HTTPS.
At first we thought this was a CORS error, so we fussed with CORS for ages. This is a Laravel 4 app, so this is what our configuration looked like. First, the Middleware:
<?php namespace App\Http;
use Symfony\Component\HttpKernel\HttpKernelInterface;
use Symfony\Component\HttpFoundation\Request as SymfonyRequest;
class Cors implements HttpKernelInterface
{
protected $app;
public function __construct(HttpKernelInterface $app)
{
$this->app = $app;
}
public function handle(SymfonyRequest $request, $type = HttpKernelInterface::MASTER_REQUEST, $catch = true)
{
// Handle on passed down request
$response = $this->app->handle($request, $type, $catch);
$response->headers->set('Access-Control-Allow-Origin' , '*', true);
$response->headers->set('Access-Control-Allow-Methods', 'GET, POST, PATCH, PUT, DELETE, OPTIONS, HEAD', true);
$response->headers->set('Access-Control-Allow-Headers', 'Content-Type, Accept, Authorization, X-Requested-With', true);
if ($request->getMethod() == 'OPTIONS') {
$response->setStatusCode(200);
$response->setContent(null);
}
return $response;
}
}
I'm still unsure of whether the "if request is OPTIONS" block is entirely necessary, but I was trying everything I could here.
Then, the binding:
<?php namespace App\Http;
use Illuminate\Support\ServiceProvider;
class CorsProvider extends ServiceProvider
{
public function register()
{
$this->app->middleware(Cors::class);
}
}
Finally, I registered the Service Provider in app/config/app.php by adding App\Http\CorsProvider::class, to the bottom of the providers array.
Now we could sniff our response headers and see that we were getting them back just like we would want for correct CORS-ification.
When verifying our CORS settings didn't fix it, we tried putting the admin panel on the same domain. We moved the Angular app to api.ourServer.com/admin
so that it was making calls from the same server.
Even that didn't fix it, which was when I realized it wasn't CORS at all.
Quick note: The primary developer on this project was Benson Lee, and he was with me every step of the way of discovering this solution. Gotta give credit where credit's due. :)
These are tools that use websockets to open a direct connection to your user's web browser so that you can push events directly to the user without reloading their page view.
If you've ever been on a web page and gotten "push" notifications of events (for example, when Laravel Forge updates the status of your server without you reloading the page), it's likely it was using websockets to open the connection between your browser and a server somewhere. Pusher.com is a hosted SaaS that makes it super easy to set it up, but you can also set up your own server using Socket.io.
I'll save you from most of the technical details, but just know: There's a direct connection being opened between a web browser and a backend server. The server can push "events" (which each have a name and optionally a payload of data) along "channels". So, for example, Forge might have a "server-status" channel which can push out an event every time a server's status changes.
So if you set up Pusher.com to handle your websockets, you'll install a Javascript-based client on the frontend, and then use the Pusher PHP SDK to "push" events from your server to Pusher.com, which will then push them to the client.
If you're not familiar, check out how Laravel 5 events work. So, we now know that we can fire events--in the Forge example, maybe it would be a ServerUpdated event.
// ServerControllerOrSomething.php
public function update()
{
// Do updating stuff...
// Now send event
event(new ServerUpdated($server));
}
In the past, if you wanted to push a websocket notification to your users here, you would pull in the Pusher SDK, and manually send a notification over to Pusher.com in the event handler.
Now, you just add two things to your event: a ShouldBroadcast
interface and a broadcastOn
method.
Check it out:
<?php
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
class ServerUpdated implements ShouldBroadcast
{
public $server;
public function __construct($server)
{
$this->server = $server;
}
public function broadcastOn()
{
return ['server-status'];
}
}
As you can see, the broadcastOn
method just sends back an array, and as you can guess from what we talked about earlier, this array is a list of all of the Pusher channels we want to broadcast this event on.
Every public property on your event will be sent along as a part of the Pusher payload. Protected or private properties won't be sent along.
Note that we passed in an Eloquent object $server
on ours; since Eloquent objects are JSONable
, the $server
object will be converted to JSON and delivered with the payload.
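So, assuming the $server model carries a couple of attributes, the payload delivered to Pusher would look something like this (the attribute names here are hypothetical):

```json
{
    "server": {
        "id": 1,
        "name": "my-server",
        "status": "provisioned"
    }
}
```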
You'll want to follow the Pusher directions to get your client code up and running, but you'll end up with something like this:
var serverChannel = pusher.subscribe('server-status');
serverChannel.bind('ServerUpdated', function(message) {
console.log(message); // Full payload
});
There's a new config/broadcasting.php configuration file that allows you to set up your connections and define which driver your app should be using.
The three possible drivers right now are Pusher.com, Socket.io, and log
, which just writes it out to a local log file for testing:
[2015-04-28 20:00:00] local.INFO: Broadcasting [ServerUpdated] on channels [server-status] with payload:
{
"server": {
"id": 1
}
}
Now there are even fewer barriers between you and adding websockets to your app. Set your Event to broadcast, plug it into a Pusher.com account, and then pull in the Pusher client on your frontend and you're up and running!
If you spin up a new app on Heroku or Forge, they come up with a random name like "airy-craig" or "soaring-peaks". I figured, why don't we create a microservice that just provides these sort of names for any consuming application?
We're going to be creating our names as variations on the phrase "Happy Brad" (in honor of my dear friend Brad), and we'll use a thesaurus API to give us synonyms to the adjective we pass in. So, for example, http://www.happy-brad-naming-service.com/happy/brad
would return something like:
{
"result": "Elated Brad"
}
while http://www.happy-brad-naming-service.com/sad/brad
might return something like:
{
"result": "Depressed Brad"
}
If you haven't installed the Lumen installer yet, do that:
composer global require "laravel/lumen-installer:~1.0"
Now, just create a new Lumen project:
cd Sites
lumen new happyBrad
cd happyBrad
cp .env.example .env
The first thing I'll do is edit bootstrap/app.php. I'll definitely want to enable the Dotenv::load line, and since I'll be using Façades here, I'll uncomment $app->withFacades(); as well.
Now run php artisan serve
and you'll be serving a site at localhost:8000
. Open that up in your browser, and then jump back to your code editor.
Let's edit app/Http/routes.php
to set up our first route.
<?php
$app->get('/{adjective}/brad', function ($adjective) {
return response([
'result' => ucwords($adjective) . ' Brad'
]);
});
Great. We're now taking in site.com/adjective/brad
and we get a JSON response of {"result": "Adjective Brad"}
.
All we have to do now is plug in the synonym generator. I just googled "thesaurus API", and picked the Big Huge Thesaurus API. I applied for a free API key, and now I'm ready to go.
I could pull in Guzzle or write an SDK, but since this is so quick-and-dirty, let's just use file_get_contents()
for now.
The URL I'm trying to create is http://words.bighugelabs.com/api/{version}/{apiKey}/{term}/json
.
First, I need to save my thesaurus API key as an environment variable, so I add a line that's THESAURUS_KEY to both .env and .env.example, and in .env I'll set it equal to my actual thesaurus API key.
Second, I need to get that key in my route.
$apiKey = getenv('THESAURUS_KEY');
Now, let's construct our URL:
$url = sprintf(
"http://words.bighugelabs.com/api/2/%s/%s/json",
$apiKey,
urlencode($adjective)
);
Then we can get it:
$result = json_decode(file_get_contents($url));
I know this will return something like this:
{
"adjective": {
"syn": [
"felicitous",
"glad",
"well-chosen"
],
"ant": [
"unhappy"
],
"rel": [
"euphoric",
"joyous"
],
"sim": [
"riant",
"prosperous"
]
}
}
For now, I just want the array of synonyms, so we'll pull in $result->adjective->syn.
Then, I want to randomly pull out a single adjective.
$synonyms = $result->adjective->syn;
$synonym = $synonyms[array_rand($synonyms)];
Finally, let's return our response:
return response([
'result' => ucwords($synonym) . ' Brad'
]);
Let's put that all together:
$app->get('/{adjective}/brad', function ($adjective) {
$apiKey = getenv('THESAURUS_KEY');
$url = sprintf(
"http://words.bighugelabs.com/api/2/%s/%s/json",
$apiKey,
urlencode($adjective)
);
$result = json_decode(file_get_contents($url));
$synonyms = $result->adjective->syn;
$synonym = $synonyms[array_rand($synonyms)];
return response([
'result' => ucwords($synonym) . ' Brad'
]);
});
I'm going to be hitting the API a lot here, though, so let's cache our response using Cache::remember
.
$app->get('/{adjective}/brad', function ($adjective) {
$apiKey = getenv('THESAURUS_KEY');
// Forever cache, because synonym lists are
// likely never going to change
$cacheTtl = 0;
$synonyms = Cache::remember(
$adjective,
$cacheTtl,
function() use ($adjective, $apiKey)
{
$url = sprintf(
"http://words.bighugelabs.com/api/2/%s/%s/json",
$apiKey,
urlencode($adjective)
);
$result = json_decode(file_get_contents($url));
$synonyms = $result->adjective->syn;
return $synonyms;
}
);
$synonym = $synonyms[array_rand($synonyms)];
return response([
'result' => ucwords($synonym) . ' Brad'
]);
});
You can use this tool yourself here:
http://happy-brad.fiveminutegeekshow.com/adorable/brad
And check out the code here:
https://github.com/mattstauffer/synonym-namer
And finally: If you were really using this service, A) you'd need a much bigger sample source to draw from, and B) you'd need to do much more error handling. This is just for fun.
Lumen has the same foundation as Laravel, and many of the same components. But Lumen is built for microservices, not so much for user-facing applications (although it can be used for anything). As such, frontend niceties like Bootstrap and Elixir and the authentication bootstrap and sessions don't come enabled out of the box, and there's less flexibility for extending and changing the bootstrap files.
Lumen is for projects and components that can benefit from the convenience and power of Laravel but can afford to sacrifice some configurability and flexibility in exchange for a speed boost.
Lumen is targeted at microservices--small, loosely-coupled components that usually support and enhance a core project. Microservices are separated components with bounded contexts (meaning they have well-defined interfaces between each other), so in a microservice architecture you might have several small Lumen apps that support another, possibly Laravel-powered, app.
So as not to talk too much about microservices and Lumen at the same time, let's just start by providing a simple caching layer in front of a slow or unreliable external service. I often work with external data sources--APIs, for example--that need transformation and/or caching. As a result, I often build small single-purpose applications that sit between one source of data and my consuming code.
I'll often use Laravel for these applications, which is fine, but there is a lot of extra code that comes with stock Laravel that I don't need for a microservice, let alone one of these little single-purpose caches. So, let's build one of these using Lumen.
One simple way I've provided a cache layer in the past is just to route all of my calls through this layer, cache the results, and serve from the cache. Let's try that out.
Lumen has a simple installer just like Laravel's. You can pull it in globally:
composer global require "laravel/lumen-installer=~1.0"
Now, you can run lumen new MyProject and it'll create that folder and set up a Lumen project in there for you.
cd Sites
lumen new my-cache
cd my-cache
OK, so now we're in our Lumen install. You can check out php artisan
to see what commands we have available, or php artisan serve
to spin up a web server at localhost:8000
that's serving your site.
Now, I just want to pass through all of my calls directly. Let's get this app running.
In Laravel, everything just works out of the box. That's pretty true in Lumen, too, but you're going to want to take a first glance at bootstrap/app.php
. There are a few options you can enable in here--they'll look like a commented-out line of code that you can turn on by un-commenting it.
Because I'll be wanting to use Laravel's Façades and .env
environment variables, I'll un-comment those lines, which look like this:
// Dotenv::load(__DIR__.'/../');
and this:
// $app->withFacades();
You can scroll through this file and see places you can enable Eloquent, route and global middleware, and service providers.
Next, let's go to app/Http/routes.php
. Note that routes in Lumen use nikic/FastRoute instead of the Illuminate Router, so things will look a little different.
Let's create a route to capture every route that's passed through.
$app->get('{path:.*}', function($path)
{
echo 'You just visited my site dot com slash ' . $path;
});
If you're familiar with Laravel, you may notice that the above route in Laravel would be:
$router->get('{path?}', function($path)
{
echo 'You just visited my site dot com slash ' . $path;
})->where('path', '.*');
But essentially, we're capturing every path and passing it in as the $path
variable.
Now, we can set up our API caller. Skip the next two paragraphs if you don't care about the API caller--it's not entirely necessary to understand this example.
Note that I'm using a generic PassThrough
class I wrote that is constructed with a base URL (e.g. http://api.mysite.com/v1/
), and has a getResultsForPath
method which takes a path (e.g. people/145
), and returns a result that's an array with headers
, body
, and status
. It operates the same as the fakeApiCaller
class I described in this blog post.
So, we're defining which headers we do and don't want to return; the root URL for our API calling; and then we're passing the path to the caller, getting a response, and creating an Illuminate response using Laravel's response()
helper, which takes the parameters of body
, status
, headers
.
$app->get('{path:.*}', function ($path) use ($app)
{
// Configure
$headersToPass = ['Content-Type', 'X-Pagination'];
$rootUrl = 'http://www.google.com/';
// Run
$passThrough = $app->make('App\PassThrough', [$rootUrl]);
$result = $passThrough->getResultsForPath($path);
// Return
return response(
$result['body'],
$result['status'],
array_only(
$result['headers'],
$headersToPass
)
);
});
Notice that we're passing the $app
instance around, which we can use to resolve objects out of the IOC container or whatever else.
Finally, let's cache the results, and we're good to go.
$app->get('{path:.*}', function ($path) use ($app)
{
    // Configure
    $cacheTtl = 60;
    $headersToPass = ['Content-Type', 'X-Pagination'];
    $rootUrl = 'http://www.google.com/';

    // Run
    $result = Cache::remember(
        $path,
        $cacheTtl,
        function () use ($path, $app, $rootUrl)
        {
            $passThrough = $app->make('App\PassThrough', [$rootUrl]);

            return $passThrough->getResultsForPath($path);
        }
    );

    // Return
    return response(
        $result['body'],
        $result['status'],
        array_only(
            $result['headers'],
            $headersToPass
        )
    );
});
That's it! We now have a blazing fast caching mechanism in front of any site. Create your PassThrough
class, use Guzzle to construct and call the path, and then split out the Guzzle response to the expected shape, and you're good to go.
Clearly this is a very simple use-case. Lumen is targeted at microservices, so it's more likely to be used when you're separating out a single high-use piece of your application. It might become your API server, or push (or pull) from your queues. It might collect data from multiple places and then serve it out in a normalized manner. If it's a single component, especially if it's high traffic, it may be worth trying it out with Lumen.
Lumen is Laravel, stripped down for speed. It doesn't bother with views and sessions and other consumer-facing conveniences--it's optimized for speedy, trim, microservices. Try it out. And check out the docs, too, for a ton of more information about how to use Lumen.
ArtisanGoose has also written a blog post about how he's using Lumen as a submodule within his Laravel application.
Here's my code. I write these little microservices all the time--they just sit in front of another service and cache it. That's it. Sometimes I modify the output or the headers, sometimes I crunch some things or mangle others. But the simplest form is just caching a slow or unreliable service.
<?php // routes.php

class fakeApiCaller
{
    public function getResultsForPath($path)
    {
        return [
            'status' => 200,
            'body' => json_encode([
                'title' => "Results for path [$path]"
            ]),
            'headers' => [
                "Content-Type" => "application/json"
            ]
        ];
    }
}

Route::get('{path?}', function ($path)
{
    $cacheLengthInMinutes = 60;
    $headersToPassThrough = [
        'Content-Type',
        'X-Pagination'
    ];

    if (Cache::has($path)) {
        $result = Cache::get($path);
    } else {
        $myCaller = new fakeApiCaller;
        $result = $myCaller->getResultsForPath($path);

        Cache::put($path, $result, $cacheLengthInMinutes);
    }

    // Pass through specified headers
    $headers = [];
    foreach ($headersToPassThrough as $header_name) {
        if (array_key_exists($header_name, $result['headers'])) {
            $headers[$header_name] = $result['headers'][$header_name];
        }
    }

    return response($result['body'], $result['status'], $headers);
})->where('path', '.*');
It's clear, and I felt pretty good about it not wasting too much time or code. A bit long, yes, but whatever.
And then Taylor passed this back:
<?php // routes.php

class fakeApiCaller
{
    public function getResultsForPath($path)
    {
        return [
            'status' => 200,
            'body' => json_encode([
                'title' => "Results for path [$path]"
            ]),
            'headers' => [
                "Content-Type" => "application/json"
            ]
        ];
    }
}

$app->get('{path?}', function ($path)
{
    $result = Cache::remember($path, 60, function () use ($path) {
        return (new fakeApiCaller)->getResultsForPath($path);
    });

    return response($result['body'], $result['status'], array_only(
        $result['headers'], ['Content-Type', 'X-Pagination']
    ));
})->where('path', '.*');
This takes advantage of several things mine didn't.
First, he used $cache->remember()
, which was a Laravel feature I didn't even know existed. Remember is for exactly what I'm doing, so it simplifies the process of saying "Give me the value of X, and if that wasn't set yet, get it from Y and then cache it at X and return it."
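If remember() is new to you too, its behavior is easy to picture in plain PHP. Here's a rough sketch of the semantics--this is my illustration, not Laravel's actual implementation, which also handles TTLs and pluggable cache stores:

```php
<?php

// Plain-PHP sketch of what Cache::remember($key, $minutes, $callback) does:
// return the cached value if present; otherwise compute it, cache it, return it.
function remember(array &$cache, $key, callable $callback)
{
    if (array_key_exists($key, $cache)) {
        return $cache[$key];
    }

    return $cache[$key] = $callback();
}
```

The key property is that the expensive callback only runs on a cache miss.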
Second, he new'ed up the fakeApiCaller
inline, which means he didn't need a spare line for that when we were only using it once.
Third, he passed the cache length (60) directly to Cache::remember
. Using this or not depends on the familiarity of your team (and potential future devs) with Laravel. Just having a bare number there could be confusing, so if you anticipate trouble there, just extract the 60 out as a variable like $cacheLengthInMinutes
.
Fourth, he used Laravel's array_only
syntax to easily only extract array items from $result['headers']
that matched my "desired pass-through" array. Again, if you have folks who aren't used to array_only
, you could extract the second parameter out to a $headersToPassThrough
variable.
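For the curious, array_only is itself a thin wrapper over PHP's standard library--an equivalent in plain PHP looks like this:

```php
<?php

// Equivalent of Laravel's array_only($array, $keys) using plain PHP:
// keep only the entries whose keys appear in $keys.
function array_only_equivalent(array $array, array $keys)
{
    return array_intersect_key($array, array_flip($keys));
}
```

Handy if you ever need the same trick outside a Laravel project.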
So, you might be looking at this saying, "Matt, really? That's not that much different!" And you're right. But to me, taking advantage of little tools and helpers--for example, I've been absolutely in love with array_walk
lately--in your tiniest lines of code is a huge aspect of being a clear and concise programmer. If you're faithful in the little things...
I spend so much of my time at the top level that I miss out on a lot of opportunities for getting better as a line-by-line in-the-trenches programmer. I miss it, and I need it, which is (part of) why I have so many side projects. And little chances like this to up the quality and concision (but not cleverness!) of my code gives me great joy.
As an aside, my friend Zack Kitzmiller re-shared the Zen of Python, which I've read many times but feel like I should tattoo on my forearms.
I went to Google PageSpeed results they linked to, and found that Gzip wasn't enabled (learn more about Gzip). So, here are the steps I took to turn it back on:
I chose to copy the settings recommended by HTML5Boilerplate--that's what I had been using on my former Apache server and they worked great. H5bp is a wonderfully-curated collection of wisdom that I'm happy to benefit from.
You can find them here: h5bp nginx config
SSH into your server. Have your sudo password ready.
I'll use vim
, but you can use whatever editor you prefer.
sudo vim /etc/nginx/nginx.conf
You can see that there's already a block of settings regarding Gzip; you could always just modify those and un-comment out the right lines. But since we're already prepared with our HTML5Boilerplate version, why don't we just wipe these lines:
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
Where the old Gzip settings were, paste your new settings. These are what they are at the time of writing this article:
# Compression
# Enable Gzip compressed.
gzip on;
# Enable compression both for HTTP/1.0 and HTTP/1.1.
gzip_http_version 1.1;
# Compression level (1-9).
# 5 is a perfect compromise between size and cpu usage, offering about
# 75% reduction for most ascii files (almost identical to level 9).
gzip_comp_level 5;
# Don't compress anything that's already small and unlikely to shrink much
# if at all (the default is 20 bytes, which is bad as that usually leads to
# larger files after gzipping).
gzip_min_length 256;
# Compress data even for clients that are connecting to us via proxies,
# identified by the "Via" header (required for CloudFront).
gzip_proxied any;
# Tell proxies to cache both the gzipped and regular version of a resource
# whenever the client's Accept-Encoding capabilities header varies;
# Avoids the issue where a non-gzip capable client (which is extremely rare
# today) would display gibberish if their proxy gave them the gzipped version.
gzip_vary on;
# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component;
# text/html is always compressed by HttpGzipModule
You can use the Forge nginx restart
dropdown, but since you're SSH'ed in you can also just run sudo service nginx restart
.
Type any URL into CheckGzipCompression.com. You can test both html pages (e.g. https://karaniapp.com/
) or individual assets like your JavaScript and CSS.
CheckGzipCompression.com seems to be inconsistent. Directions coming soon on how to do it yourself, and Chris Fidao suggests https://redbot.org.
That's it--you're now Gzip-compressing all of your basic text-based assets, and a few other freebie types as well. Go forth and wow Google.
When Taylor Otwell first released Laravel Forge, the work it took to spin up and manage multiple cloud VPSes and deploy sites to them was a big reason shared hosting had such a huge foothold. Forge took the process of creating a cloud VPS server, managing its environment, and deploying (and auto-Git-hook-deploying) sites to it and made it accessible and affordable.
But more complicated deploy needs--for example, managing complicated deploy scripts, or zero-downtime deploys--weren’t met by Forge. Whether or not you were a Forge user, you’d need to rely on a deploy system like Capistrano or Chef or Ansible to benefit from this level of power and flexibility, and the learning curve for these systems can be prohibitively high.
Today, Envoyer is launching to address those needs. Envoyer is a zero-down-time deployer for PHP & Laravel projects, which means it is a tool that you connect to your server to run your deploys, and which uses a series of tools to ensure that all of the preparation work each deploy needs in order to run--for example, composer install
--happens in the background while the previous version of the site is still online.
If you’ve ever worked with Capistrano, you’re already familiar with this. Note that this is a technical answer; if you don’t care how it works, just skip to Your First Project With Envoyer.
A deploy on a traditional system (e.g. Forge) means that there’s a single folder where your content lives. Let’s say you have two files, index.php
and the app
folder. You’d normally place them in the web root:
/webroot/index.php
/webroot/app
So, your deploy system would cd
into that directory, pull down your latest code (git pull
), and then run your deploy script, which likely includes composer install
. That means that your site is likely non-functional for a few seconds at least.
A zero-down-time deploy system maintains a system of releases
folders, each of which represents a single commit in your git
history. Every time the deploy system gets triggered for a build, it creates a new folder in releases
and clones the repo just at that commit into that folder. Then it runs your full deploy script in there. Only then, once it has a fully functional version of the site ready to go, it changes the *current folder to be a symlink to the latest folder under releases*. That’s it! The public folder at any time is just a symlink to one of several available folders in releases
.
And Envoyer manages all of that for you. You end up with this:
/webroot/current
/webroot/releases
/webroot/releases/20150316074832
/webroot/releases/20150315041251
Note that any folders that need to persist across releases--for example, the storage
folder in Laravel apps--need to exist at the webroot level, and need to be symlinked into each release. If you use Laravel, this is already managed for you.
OK, so let's get started with your first Envoyer project. Go over to Envoyer.io. Sign up. Choose “I want to manage my own projects.”
Here’s your dashboard:
Your first step is to add your project. Click the big Add Project button at the top right.
You’ll be prompted to choose which type of project it is, who your Git provider is, and which particular project it is.
Once you start your first project, you can click into the project’s dashboard.
Click the Servers tab on this dashboard and add a new server.
In order to add a new server, you’ll need to know its IP address, which Unix/SSH user you’re going to be connecting as, and what your project path is (likely something like /home/username/website.com
or /var/www/website.com
).
Once you add the server, Envoyer will give you an SSH key to add to that server so that Forge can authenticate.
Copy that key and add it to ~/.ssh/authorized_keys
on your server, or if you’re using Forge, add it to the SSH Keys tab for that server.
Now, go back to Envoyer and check your connection status by clicking the refresh icon next to your server in the Connection Status column.
If everything works, the icon will turn green and show the word “Successful.”
Prepend current/ to your server's served/public directory for this site
to it. If you’re a Forge & Laravel user, you’re used to setting the Web Directory in Forge to public
. Now, that’ll be current/public
.
Go back to your project dashboard and click the red Deploy button to trigger a new deploy. Go to the deployments
tab and watch the deploy pop up.
Click on the arrow button on the deployment and you’ll be able to see each of the steps of your deploy script (at this point, it’s the default deploy script for whichever project type you selected):
Note that you can also drill down and view the specific terminal output once any of the steps of the deploy script is done:
That’s it for your first project! You now have a project up and running on Envoyer. One note: if you want it to automatically deploy your code every time you add a new commit to your Git branch, go to Project Settings / Source Control and check the box labeled “Deploy When Code Is Pushed”:
Envoyer has 4 steps in its deploy process, and you can run custom scripts before or after each of these steps. The steps are Clone New Release, Install Composer Dependencies, Activate New Release, and Purge Old Releases.
You can click the gear button next to any of the steps, and you’ll see this screen:
This will allow you to add hooks before or after this step. Click the Add Hook button on either side.
As you can see, each hook can customize its name, which user it runs as, and the body of the hook, which has access to the {{release}}
variable which represents the folder path for the latest release folder--e.g. /home/my-user/website.com/releases/20150316083301
.
Note that you can also click and drag any of the hooks in each section to re-order them:
So, what if you push out a new deploy and you realize it broke everything? Normally you’d ssh into your server, git log
, find a commit point, copy the hash, git checkout THAT_HASH
, composer install
, and then maybe stop to see if your heart was still beating.
With Envoyer, your previous release is still a fully functional folder in the releases
directory. Just go to your project dashboard, find the latest functional deploy, and click the cloud “Redeploy” button. It’ll take the releases
folder for that deploy and symlink the current
directory to it. That’s it.
You may have noticed the Post-Deployment Health section of the Envoyer project dashboard. There’s an icon for New York, London, and Singapore, the three regions that Envoyer checks your site’s health from. Until you set up a Health Check URL, though, these icons will just be question marks.
Each project can have a Health Check URL, which Envoyer will call after every deploy. You can edit that in the Project Settings.
After every deploy, Envoyer looks at the HTTP response from that URL and makes sure that it’s 200
(the HTTP Status code for “OK”), and if so, it assumes your site is healthy. You could make this the home page of your site, or a special test URL that you handle in a way that best indicates the health of your site--whatever works best for you.
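If you go the "special test URL" route, the core logic might look something like this sketch--the individual checks are hypothetical placeholders; substitute whatever actually indicates your app's health (a database ping, a queue check, etc.):

```php
<?php

// Sketch of a health-check endpoint's core logic: run a set of checks and
// return 200 only if every one passes. The checks passed in are stand-ins
// for real probes (database, queue, cache, ...).
function healthStatus(array $checks)
{
    foreach ($checks as $check) {
        if ($check() !== true) {
            return 503; // something is down; Envoyer will flag the deploy
        }
    }

    return 200; // all checks passed; Envoyer shows a healthy deploy
}
```

Your route would then return a response with that status code.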
If everything’s healthy, you get all green checks:
And if anything breaks, you’ll get red x’es (and you’ll probably want to read the Rollbacks section above):
If you want to deploy to multiple servers, just add a second server to your project and it’ll automatically be pulled into the deploy process.
Note, however, when the deploy process runs, it waits for each step to finish on each server before the rest proceed, ensuring all of your servers stay in perfect sync.
If you have local configuration files that you want to sync across your deploys and servers (.env
if you’re using phpdotenv
or Laravel 5, .env.php
if you’re using Laravel 4), click the Manage Environment button in your project dashboard.
Note that this configuration file is encrypted with a password that is never stored by Envoyer, so make sure not to lose your “Environment Key” (i.e. this password).
Once you enter your password, you’re presented with a text box that’s basically a code editor. Paste whatever you want in this file and it’ll be saved--if you chose Laravel 5 or Other as your project type, this file will be .env
, and if you chose Laravel 4
the file will be .env.php
.
Now choose the servers you want to save it to and save.
Envoyer will manage saving this file to your web root and symlinking it into each release directory.
If you visit the notifications tab for your project, you can set Envoyer up to notify you upon any major events. Right now Envoyer can notify you in either a Hipchat or a Slack chatroom.
One of the biggest difficulties I’ve had in managing servers’ health is ensuring that their cron jobs are always running. Envoyer’s Heartbeats allow you to set the expectation that a certain URL will be pinged at a certain frequency, which means you can have your cron job ping that URL at the end of each run. If Envoyer misses a ping, it’ll notify you.
To add a Heartbeat, go to the Heartbeats tab for your project. Click the **Add Heartbeat** button.
You can give it a label and then define the frequency with which you expect it to run.
Once you add it, it’ll start as Healthy. Until the period of time you chose has elapsed, it’ll stay Healthy. So if you chose “10 minutes”, it’ll stay Healthy for 10 minutes after you create it.
Each Heartbeat has a unique URL, which you can get from its listing on the Heartbeats tab. It’ll be something like this: https://beats.envoyer.io/heartbeat/203849102395790125
A regular cron file can just do this to ping the Heartbeat:
php yourfile.php && curl https://beats.envoyer.io/heartbeat/203849102395790125
If you use Laravel, the latest versions of Laravel have added a thenPing()
method to the scheduler that allows you to ping any URL after the cron job has run. This is perfect for Heartbeats:
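In the scheduler that's a one-liner chained onto your task, e.g. `$schedule->command('emails:send')->daily()->thenPing($heartbeatUrl)` (the command name here is just an example). The underlying pattern--only ping the heartbeat when the job actually completed--looks like this in plain PHP; this is my sketch, with the HTTP call stubbed as a callable:

```php
<?php

// Sketch of the Heartbeat pattern: run the cron job, and ping the
// heartbeat URL only if it completed. $ping stands in for an HTTP GET
// (e.g. curl or file_get_contents) against the Envoyer heartbeat URL.
function runWithHeartbeat(callable $job, callable $ping)
{
    $succeeded = false;

    try {
        $job();
        $succeeded = true;
    } catch (Exception $e) {
        // Job failed: skip the ping so Envoyer notices the missed beat.
    }

    if ($succeeded) {
        $ping();
    }

    return $succeeded;
}
```

The point is that a crashed job never pings, so a missed beat always means something went wrong.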
And here’s the notification you’ll get if your Heartbeat is missed:
Envoyer allows for an unlimited number of people to access your servers. If someone on your team wants to collaborate but not set up their own Envoyer account, just have them sign up and choose “I’m just collaborating with others.”
Envoyer has a Collaborators tab on each project that allows you to give other people access to your project by inviting them via email.
Note that collaborators have access to everything in the project except “Delete project.”
storage
directory. But all of that is customizable.
Laravel 5 handles most environment-specific configuration through .env
files.
But one that can't just live in .env
is the environment-dependent loading of service providers.
On a project we're working on, we want to register our error handlers in service providers, and we want to register a different error handler depending on the environment. We have two: ProductionErrorHandler
and VerboseErrorHandler
, the second of which is for development environments.
In case you're not familiar, defining normal (non-environment-specific) Service Providers happens in /config/app.php
. There's a providers
array there that looks a bit like this:
'providers' => [

    /*
     * Laravel Framework Service Providers...
     */
    'Illuminate\Foundation\Providers\ArtisanServiceProvider',
    'Illuminate\Auth\AuthServiceProvider',
    'Illuminate\Bus\BusServiceProvider',
    ...

]
So, if your service provider should be loaded in every environment, just toss it into that array and you're good to go.
However, if you want to make it conditional, you'll need to head over to /app/Providers/AppServiceProvider.php
. This file is the general place you're going to want to be booting and registering anything that's not handled in another service provider, so this is a place you can go to conditionally register your service providers.
Here's what it looks like right now:
<?php namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        //
    }

    /**
     * Register any application services.
     *
     * This service provider is a great spot to register your various container
     * bindings with the application. As you can see, we are registering our
     * "Registrar" implementation here. You can add your own bindings too!
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind(
            'Illuminate\Contracts\Auth\Registrar',
            'App\Services\Registrar'
        );
    }
}
So, let's do our switch.
// AppServiceProvider.php
public function register()
{
    $this->app->bind(
        'Illuminate\Contracts\Auth\Registrar',
        'App\Services\Registrar'
    );

    if ($this->app->environment('production')) {
        $this->app->register('App\Providers\ProductionErrorHandlerServiceProvider');
    } else {
        $this->app->register('App\Providers\VerboseErrorHandlerServiceProvider');
    }
}
$this->app->register()
will set up the service provider just like adding it to config/app.php
will, so its register()
and boot()
methods will get called at the appropriate times.
You could also use switch
instead of if
, or you could do your work based on other environment variables, or whatever else--but this is your current best bet to conditionally load service providers. Hope this helps!
php artisan fresh
, but for a lot of quick-start, rapid development use cases, I've absolutely loved it.
However, what if you prefer managing your frontend dependencies with Bower? It's actually very simple to keep Laravel 5's default setup and just tweak it a bit to rely on Bower. Check it out:
bower.json
You need a bower.json
file to get started. Create a file in the root of your directory with the following content:
{
    "name": "your-project"
}
.bowerrc
By default Bower installs into /bower_components
, but we want our bower dependencies in our resources/assets
folder. Create a .bowerrc
file in your project root that contains the following:
{
    "directory": "resources/assets/bower"
}
Now, assuming you have bower installed (if not, $ npm install -g bower
), you can run the following commands to add jQuery and Bootstrap to your bower.json
, and install them:
$ bower install jquery --save
$ bower install bootstrap --save
Now, we need to update our scripts to pull in the new dependencies. In resources/less/app.less
, change this:
@import "bootstrap/bootstrap";
to this:
@import "../bower/bootstrap/less/bootstrap";
In gulpfile.js
add these lines (within the main elixir()
block):
mix.scripts([
    '../assets/bower/jquery/dist/jquery.js',
    '../assets/bower/bootstrap/dist/js/bootstrap.js'
], 'public/js/vendor.js');
In resources/views/app.blade.php
, replace these lines:
<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.1/js/bootstrap.min.js"></script>
With this:
<script src="{{ asset('/js/vendor.js') }}"></script>
Delete the resources/assets/less/bootstrap
folder.
gulp
Now, just run gulp
from the command line. It'll combine the vendor dependencies into public/js/vendor.js
, it'll bring in the new versions of the bootstrap LESS into your stylesheet, and you're now ready to go.
You're ready! Your site should look exactly the same, but it's now backed by the power of Bower. (ba dum ching)
If you ever want to add new dependencies, you can just user bower and add javascript files to your gulpfile.js
mix.scripts
array, and add LESS or CSS to your app.less
imports. Done!
What if you want to import a theme from Bootstrap? Turns out that's wildly easy. Bring in your variables.less
from the theme, and just load it in app.less
after you import the Bower Bootstrap. It turns out LESS variables are overwritten if new versions of them are imported later (HT @marcorivm), so you just do this in app.less
:
@import "../bower/bootstrap/less/bootstrap";
@import "../bootswatch/variables";
@import "../bootswatch/bootswatch";
There's a ton more you can do with CSV, so go check out the docs, but here's a simple recipe to get started:
$ composer require league/csv
Let's say you're in an export
method in your controller.
public function export()
{
    $people = Person::all();
}
$csv = \League\Csv\Writer::createFromFileObject(new \SplTempFileObject);
You can manually pass an array of your headers, but this is the quickest way to get it up and running in a prototype:
$csv->insertOne(array_keys($people[0]->getAttributes()));
You just inserted a row (insertOne()
) filled with an array of the column names from your people
table.
Thanks to Steve Barbera on Twitter for pointing out that the code I originally had here would fail in many settings.
foreach ($people as $person) {
    $csv->insertOne($person->toArray());
}
$csv->output('people.csv');
That's it! ... sort of. If you use the CSV writer's output()
method, it'll write directly to the browser, which is fine in some contexts. But in Laravel, you're really better off creating a Laravel response object and setting your headers manually and then adding the CSV output to that response object, like this:
return response((string) $csv, 200, [
    'Content-Type' => 'text/csv',
    'Content-Transfer-Encoding' => 'binary',
    'Content-Disposition' => 'attachment; filename="people.csv"',
]);
Now, that's really it! You're now dumping an entire Eloquent collection result straight to CSV. Let's check it out:
public function export()
{
    $people = Person::all();

    $csv = \League\Csv\Writer::createFromFileObject(new \SplTempFileObject);

    $csv->insertOne(array_keys($people[0]->getAttributes()));

    foreach ($people as $person) {
        $csv->insertOne($person->toArray());
    }

    return response((string) $csv, 200, [
        'Content-Type' => 'text/csv',
        'Content-Transfer-Encoding' => 'binary',
        'Content-Disposition' => 'attachment; filename="people.csv"',
    ]);
}
If someone visits that route, they'll get people.csv
downloaded straight into their browser. Done.
Again, check the docs to learn more about it. I hope this helps!
Casting a value means changing it to (or ensuring it is already) a particular type. Some types you might be familiar with are integer
or boolean
.
Attribute casting is a feature of Eloquent models that allows you to set your model to automatically cast a particular attribute on your Eloquent model to a certain type.
Note: You could do this in the past, but you would have to manually define a mutator for each attribute; now you can do it automatically with a single configuration array.
That means if you store your data in a particular format in the database, and you want it to return in a different format, you can now cast it to the new format.
The most common uses for this will be when you store numbers—they’re returned as strings by default, but Eloquent attribute casting allows you to cast them as integer
, real
, float
, or double
—or booleans—you can convert 0
and 1
in your database to true
and false
.
But that’s not all.
You cast attributes in Eloquent by adding a protected $casts
array to your model.
/**
 * The attributes that should be cast to native types.
 *
 * @var array
 */
protected $casts = [
    'is_admin' => 'boolean',
];
As you can see, each entry in the array has the attribute name as the key, and the cast type as the value. This $casts
array is telling Eloquent: “Every time I access a property on this model named is_admin
, please return it cast to type boolean
.”
integer
(or int
)This casts your field to an integer using return (int) $value
.
float
(or real
or double
)Real, Float, and Double are the same thing in PHP. PHP’s (double)
and (real)
type casting are just aliases to (float)
; and if you check out the source, Eloquent is literally running return (float) $value
for all three of these keys.
string
This casts your field to a string using return (string) $value
.
boolean
(or bool
)This casts your field to a boolean using return (bool) $value
, which means you’ll likely be storing your values as 0
and 1
.
object
Object and Array are the most interesting option. Both convert (deserialize) JSON-serialized arrays into PHP. Object uses return json_decode($value)
, returning a stdClass object.
array
Array deserializes JSON-serialized arrays into PHP arrays, using return json_decode($value, true)
, returning an array.
You can view the actual code for these in the source.
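For reference, here's a condensed sketch of that switch--my paraphrase of the idea described above, not Eloquent's exact source:

```php
<?php

// Condensed sketch of Eloquent's castAttribute() behavior: map a cast type
// (and its aliases) to the corresponding native conversion. Illustration
// only; the framework's real method lives on the Eloquent Model class.
function castAttribute($type, $value)
{
    switch ($type) {
        case 'int':
        case 'integer':
            return (int) $value;
        case 'real':
        case 'float':
        case 'double':
            return (float) $value;
        case 'string':
            return (string) $value;
        case 'bool':
        case 'boolean':
            return (bool) $value;
        case 'object':
            return json_decode($value);        // stdClass
        case 'array':
            return json_decode($value, true);  // PHP array
        default:
            return $value;
    }
}
```

Seeing all six cases side by side makes it clear why `int`/`integer`, `bool`/`boolean`, and `real`/`float`/`double` behave identically.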
As you can see, Eloquent attribute casting has a ton of potential to free us up from unnecessary repetitive logic, and also sneakily makes it a lot easier to store data in JSON in our database. Good Stuff!
But there are some higher-level architectural concerns regarding namespaces that a lot of folks have brought up to me recently, so I figured I'd get a little bit out on "paper".
I've seen a few primary ways of organizing namespaces. I'll discuss the pros and cons of each.
Command
that sends a Receipt
to a User
.App
just for brevity, but you can replace that with Vendor\Package
.Before we even get to talking about the specific ways we can namespace, let's talk about why we're doing it. I'm indebted to Shawn McCool (as always) for helping connect some of my vague thoughts here to actual computer science concepts.
As Shawn pointed out to me, the purpose of namespacing is cohesion: describing how connected some code is to other code. He pointed out that in other languages, namespaces are called "Packages" or "Modules"--and once you realize this, you understand that we're seeing sub-namespaces as little individual modules that should rely on other modules as little as possible (encapsulation). If modularity is one of the primary end goals of our namespacing, then that becomes one (of several) metrics we can use to judge a particular style of namespacing.
Of course, even this statement--that modularity is a primary end goal of namespacing--is under debate. But when I hear it, I like it.
OK, let's get down to it.
<?php namespace App;
class SendReceipt {}
src
Receipt
ReceiptRepository
SendReceipt
SendReceiptHandler
User
UserRepository
CreateUser
CreateUserHandler
I guess it's simpler to not have to deal with sub-namespaces? On a very small app, this might be fine. If you have five classes, who's saying you need to sub-namespace them at all? If this is a single package with a single purpose, or an application that has a single "module", it may not need anything other than a single global namespace.
The moment you get an application of any complexity, it's going to be hard to find your classes in the huge mush of your global namespace. If you have any separation of identity or purpose among your types of classes--for example, Users vs. Receipts--this global namespacing throws them together in a big pot. Not modular at all.
<?php namespace App\Commands;
class SendReceipt {}
src
Commands
SendReceipt
CreateUser
Entities
Receipt
User
Handlers
SendReceiptHandler
CreateUserHandler
Repositories
ReceiptRepository
UserRepository
When you want to hunt down a command, you know exactly where it lives. If your brain says "I need to edit one of my commands. Which one? The one that sends receipts", this is a good fit. This is one level more organized than ignoring namespaces, but not so deep that you'll be annoyed using it in a medium-sized site.
Additionally, your related classes (e.g. commands) can live next to each other; you can see any parallels there may be between SendReceipt
and SendReminder
, for example, and see how they all connect to each other.
This method also allows you to architect relationships between class types programmatically. For example, a Command Bus might know that the Handler for a Command (which lives at App\Commands\{commandName}
) always lives at App\Handlers\{commandName}Handler
.
Doing it this way leaves you with classes in the same context spread across many different namespaces. For example, you might have App\Commands\SendReceipt
, App\Receipt
or App\Entities\Receipt
, App\Providers\ReceiptServiceProvider
, App\Handlers\Commands\SendReceiptHandler
, App\Repositories\ReceiptRepository
, and on and on. All of your Receipt logic, sprinkled all over the place.
If we're focusing on encapsulation and modularity, this grouping is not winning. Because we've spread all of the code about billing, for example, across our entire namespace landscape, the class organization isn't focusing on creating a billing module. Classes are next to each other purely because they happen to follow the same architectural pattern, not because they're actually related.
<?php namespace App\Billing;
class SendReceipt {}
src
    Billing
        Receipt
        ReceiptRepository
        SendReceipt
        SendReceiptHandler
    User
        User
        UserRepository
        CreateUser
        CreateUserHandler
If you're purely working in Billing right now, you know you'll have everything billing-related together in one spot. For your receipts, the entity, the command, the command handler, the repository, and so on--all together in one nice, neat bundle, easy to address as a single group.
This is where we start experiencing encapsulation and modularity. All of our Billing-related classes, regardless of their design pattern, are together in one place--which helps us group them mentally, even starting to think of them as a unit which may be able to live external to this application.
Your commands are now sprinkled across the code base. Your repositories, too. And your entities. And your command handlers.
<?php namespace App\Billing\Commands;
class SendReceipt {}
src
    Billing
        Entities
            Receipt
        Repositories
            ReceiptRepository
        Commands
            SendReceipt
        Handlers
            SendReceiptHandler
    User
        Entities
            User
        Repositories
            UserRepository
        Commands
            CreateUser
        Handlers
            CreateUserHandler
Separating it this way gives you the greatest level of namespace separation--that alone is a pro for some people. This is especially useful if you have a large codebase with a lot of classes--the more classes, the more you'll appreciate additional options for separation. Imagine adding an `UpdateUser` command, a `DeleteUser` command, a `Subscription` entity and repository and associated handlers...
Just like Group by pattern, you can programmatically relate classes.
And while your classes are separated by pattern, they're still grouped by context, so you still have all of your `Receipt` code together in one place. We still get the benefit of modularity that we did in Group by context.
The longer the namespace definition is for your class, the more mental energy has to be spent understanding the entire namespace stack. There's more opportunity for typos and confusion. And with a small or medium-sized application, this may seem overkill.
Since you're grouping your classes by pattern at the lowest level, you don't get as much of the grouped-by-context benefit as the Group by context style.
OK, so that's a lot of abstract theory. What about a concrete example? I've taken a few classes from SaveMyProposals as an example. Let's look at how we manage Talks and Conferences, and how we propose talks to conferences:
Global namespacing
app
    Conference
    ConferenceRepository
    CreateConference
    CreateConferenceHandler
    CreateTalk
    CreateTalkHandler
    DeleteConference
    DeleteConferenceHandler
    DeleteTalk
    DeleteTalkHandler
    ProposeTalkToConference
    ProposeTalkToConferenceHandler
    RetractTalkProposal
    RetractTalkProposalHandler
    Talk
    TalkRepository
    UpdateConference
    UpdateConferenceHandler
    UpdateTalk
    UpdateTalkHandler
Group by pattern
app
    Commands
        CreateConference
        CreateTalk
        DeleteConference
        DeleteProposal
        DeleteTalk
        ProposeTalkToConference
        RetractTalkProposal
        UpdateConference
        UpdateTalk
    Entities
        Conference
        Proposal
        Talk
    Handlers
        CreateConferenceHandler
        CreateTalkHandler
        CreateProposalHandler
        DeleteConferenceHandler
        DeleteProposalHandler
        DeleteTalkHandler
        ProposeTalkToConferenceHandler
        RetractTalkProposalHandler
        UpdateConferenceHandler
        UpdateTalkHandler
    Repositories
        ConferenceRepository
        TalkRepository
Group by context
app
    Conferences
        Conference
        ConferenceRepository
        CreateConference
        CreateConferenceHandler
        DeleteConference
        DeleteConferenceHandler
        UpdateConference
        UpdateConferenceHandler
    Talks
        CreateTalk
        CreateTalkHandler
        DeleteTalk
        DeleteTalkHandler
        ProposeTalkToConference
        ProposeTalkToConferenceHandler
        Talk
        TalkRepository
        RetractTalkProposal
        RetractTalkProposalHandler
        UpdateTalk
        UpdateTalkHandler
Group by context and pattern
app
    Conferences
        Commands
            CreateConference
            DeleteConference
            UpdateConference
        Entities
            Conference
        Handlers
            CreateConferenceHandler
            DeleteConferenceHandler
            UpdateConferenceHandler
        Repositories
            ConferenceRepository
    Talks
        Commands
            CreateTalk
            DeleteTalk
            ProposeTalkToConference
            RetractTalkProposal
            UpdateTalk
        Entities
            Talk
        Handlers
            CreateTalkHandler
            DeleteTalkHandler
            ProposeTalkToConferenceHandler
            RetractTalkProposalHandler
            UpdateTalkHandler
        Repositories
            TalkRepository
So, what's the answer?
It depends.
It's possible that the simpler organizational structures work better for applications with fewer classes and entities, whereas the larger organizational structures are a better match for more robust systems. But that's not a hard rule. I'm not even 100% sure it's a rule at all.
I think the modularity and encapsulation ideas are worth giving your brain some time with. Think about how you would design it if each sub-namespace were to be removed from the others.
But in the end, I'd say just try them all out. Figure out what you like. Figure out what bugs you. Figure out what benefits you gain from each. You'll figure this out.
In PHP prior to 5.3 (2009), any class you defined lived at the same global level as other classes.
Class `User`, class `Contact`, class `StripeBiller`--they're all together in the global namespace.
This may seem simple, but it makes organization tough, which is why PHP developers started using underscores to separate their class names. For example, if I were developing a package called "Cacher", I might name the class `Mattstauffer_Cacher` so as to differentiate it from someone else's `Cacher`--or `Mattstauffer_Database_Cacher`, to differentiate it from an API cacher.
That worked decently, and there were even autoloading standards that mapped the underscores in class names to folders on the file system; for example, `Mattstauffer_Database_Cacher` would be assumed to live in the file `Mattstauffer/Database/Cacher.php`.
An autoloader is a piece of code that makes it so that, instead of having to `require` or `include` all of the files that contain your class definitions, PHP knows where to find your class definitions based on a particular convention.
But it was pretty messy, and often ended up with class names like `Zend_Db_Statement_Oracle_Exception` and worse. Thankfully, in PHP 5.3, real namespaces were introduced.
Namespaces are like a virtual directory structure for your classes.
So class `Mattstauffer_Database_Cacher` could become class `Cacher` in the `Mattstauffer\Database` namespace:
<?php
class Mattstauffer_Database_Cacher {}
is now:
<?php namespace Mattstauffer\Database;
class Cacher {}
And we would refer to it elsewhere in the app as `Mattstauffer\Database\Cacher`.
Let's take Karani--it's a CRM with a financial component, so it tracks donors and receipts, among many other things.
Let's set `Karani` as our top-level namespace (sort of like the parent folder--usually named after your app or package). This might have some classes related to Contacts, and some related to Billing, so we're going to create a sub-namespace for each: `Karani\Billing` and `Karani\Contacts`.
Let's make a class or two in each:
<?php namespace Karani\Billing;
class Receipt {}
<?php namespace Karani\Billing;
class Subscription {}
<?php namespace Karani\Contacts;
class Donor {}
So, we're picturing a directory structure like this:
Karani
    Billing
        Receipt
        Subscription
    Contacts
        Donor
So, if a Subscription can send a Receipt, it's easy to refer to it:
<?php namespace Karani\Billing;
class Subscription
{
public function sendReceipt()
{
$receipt = new Receipt;
}
}
Since `Receipt` is in the same namespace as `Subscription`, you can just refer to it like you would if you weren't using namespaces.
OK, but what if I want to reference a Receipt inside of a Donor?
<?php namespace Karani\Contacts;
class Donor
{
public function sendReceipt()
{
// This won't work!
$receipt = new Receipt;
}
}
You guessed it: This won't work.
We're in the `Karani\Contacts` namespace, so when we wrote `new Receipt`, PHP assumed we were talking about `Karani\Contacts\Receipt`. But that class doesn't exist, and that's not what we're looking for.
So, you'll get a `Class Karani\Contacts\Receipt not found` error.
You might be tempted to modify it to instead say `$receipt = new Karani\Billing\Receipt`--but even that won't work. Since we're in the `Karani\Contacts` namespace right now, PHP sees anything you write as being relative to the namespace you're in. So that would try to load a class named `Karani\Contacts\Karani\Billing\Receipt`, which also clearly doesn't exist.
`use` blocks and Fully Qualified Class Names
Instead, you have two options:
First, you can precede it with a slash to create its FQCN (Fully Qualified Class Name): `$receipt = new \Karani\Billing\Receipt;`, which signals PHP to escape out of the current namespace before looking for this class.
If you precede the full namespace with a slash, creating the FQCN, you can refer to this class anywhere in your app without worrying about your current namespace.
Or, second, you can `use` the class at the top of the file, and then just reference it as `Receipt`:
<?php namespace Karani\Contacts;
use Karani\Billing\Receipt;
class Donor
{
public function sendReceipt()
{
$receipt = new Receipt;
}
}
As you can tell, `use` imports a class from a different namespace into this namespace so we can refer to it more easily. Once you've imported the class, any time you reference `Receipt` in this class, it'll assume you're pointing to the imported class.
But what if you also have a `Receipt` class in your current namespace? What if your class needs access to both `Karani\Contacts\Receipt` and `Karani\Billing\Receipt`?
You can't just import the `Karani\Billing\Receipt` class, or you won't be able to use both--they'd both have the same name in this class.
Instead, you'll need to alias it. You can change the `use` statement to something like `use Karani\Billing\Receipt as BillingReceipt;`. Now you've aliased the class, and you can refer to the imported class as `BillingReceipt` throughout your class.
You know the folder analogy I just used above?
It's easy to think about your classes that way, but there's actually not any inherent connection between your namespaces and your files' structure. Unless you use an autoloader, PHP doesn't have any idea where those classes actually live in your directory structure.
Thankfully, PSR-0 (now deprecated) and PSR-4 are autoloading standards that actually map your namespaces to real folders. So, if you're using PSR-0 or PSR-4--which is extremely likely if you're using Composer or any modern framework-- and a compatible autoloader, you can assume that the classes actually are in folders.
So, let's say I want the `Karani` namespace to live in my `src` folder.
Here's my folder structure for a generic, framework-independent project:
app
public
src
    Billing
    Contacts
vendor
As you can see, the `src` folder represents the `Karani` top-level namespace. Since I'm using Composer as my autoloader, all I need to do to get my application to autoload my classes is teach Composer how to map namespaces to folders. Let's do that using PSR-4.
I'm going to open up `composer.json` and add a PSR-4 autoload section:
{
"autoload": {
"psr-4": {
"Karani\\": "src/"
}
}
}
So you can see: the left side is the namespace that we're defining (note that you need to escape the slash separators here by doubling them), and the right side is the directory.
As you can see, there's a lot going on here, but it's really pretty simple: 98% of the time, you're going to be working with a PSR-4-structured, Composer-autoloaded, set of classes.
So 98% of the time, you can check your `composer.json`, figure out where the root of the top-level namespace lives, and assume you'll then have a one-to-one map of your namespace and the folders/files in that directory. Done.
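That one-to-one map can be sketched as a tiny resolver. `psr4Path` is a hypothetical illustration of the lookup Composer performs, not Composer's actual API:

```php
<?php
// Hypothetical sketch of PSR-4 resolution: strip the matching namespace
// prefix, swap namespace separators for directory separators, append .php.
function psr4Path($class, array $map)
{
    foreach ($map as $prefix => $dir) {
        if (strpos($class, $prefix) === 0) {
            $relative = substr($class, strlen($prefix));

            return $dir . str_replace('\\', '/', $relative) . '.php';
        }
    }

    return null; // no prefix matched
}

echo psr4Path('Karani\\Billing\\Receipt', ['Karani\\' => 'src/']);
// src/Billing/Receipt.php
```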
And remember: next time you get `Class SOMETHING not found`, you probably just need to import it with a `use` statement at the top of your file.
In Laravel 5, things have changed a bit.
Now, all custom error and exception handling has moved to `app/Exceptions/Handler.php`. You'll remember that that's where we went to bring Whoops back.
You’ll notice, however, that it does this by default:
/**
* Render an exception into an HTTP response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
if ($this->isHttpException($e))
{
return $this->renderHttpException($e);
}
else
{
return parent::render($request, $e);
}
}
For all HTTP exceptions (like 404s and 503s), it uses the `renderHttpException()` method, which isn't defined in this file. So, we check its parent, `\Illuminate\Foundation\Exceptions\Handler`, where we find the `renderHttpException()` method:
/**
* Render the given HttpException.
*
* @param \Symfony\Component\HttpKernel\Exception\HttpException $e
* @return \Symfony\Component\HttpFoundation\Response
*/
protected function renderHttpException(HttpException $e)
{
if (view()->exists('errors.'.$e->getStatusCode()))
{
return response()->view('errors.'.$e->getStatusCode(), [], $e->getStatusCode());
}
else
{
return (new SymfonyDisplayer(config('app.debug')))->createResponse($e);
}
}
So, if a view exists for `errors.{httpStatusCode}`, it'll automatically display for that status code (and pass along a little bit of information).
That means customizing your 404 error page is as simple as adding a view at `resources/views/errors/404.blade.php`. Done!
The artisan commands for generating commands and events are a good start--they both create their own entity and (optionally) its handler. But you still can spend an hour writing the command and handler, and then waste another 15 minutes trying to figure out why it's not working, only to realize you never actually bound the two together.
Well, dear reader, your white-knuckled wait is finally over. In Laravel 5, you can bind (non-existent) events and handlers in the `EventServiceProvider`, run `php artisan event:generate`, and Artisan will automatically generate the files for you--both for the Event and its Handler.
Check out our events and handlers directories before:
app/
    Events/
        Event.php
    Handlers/
        Events/
1) Open `app/Providers/EventServiceProvider.php`. Find the `$listen` property, which is where you would normally bind your events, and add one in the following format:
protected $listen = [
DidSomethingEvent::class => [
RespondOneWay::class,
RespondAnotherWay::class
]
];
2) Run php artisan event:generate
3) Profit.
Check it out.
app/
    Events/
        Event.php
        DidSomethingEvent.php
    Handlers/
        Events/
            RespondOneWay.php
            RespondAnotherWay.php
Created. Bound. Ready to go. Even typehinted:
<?php namespace App\Handlers\Events;
...
class RespondOneWay {
...
public function handle(DidSomethingEvent $event)
{
}
}
Yah, that's it. You can now design your eventing system abstractly--you could plan the entire thing without writing a single command or handler. And once you're ready to go, generate all of your events and handlers in a single command.
However, all of these instructions presume you're using the core Laravel Application (IoC container) to extend the other classes. What if you want to extend the `Application` itself?
This has come up recently because some folks are debating whether or not Laravel 5 should make it easier to change the default folder paths--e.g. changing `storage`'s location, or changing `public` to be `public_html`. There are, at the time of this writing, no easy ways to do that other than extending Application, and that has some folks worried.
So, let's do it. Let's take a Laravel 5 application, extend its `Application`, and change its storage path to be `/OMGStorage`.
First, create an application class somewhere in your namespace, and have it extend `Illuminate\Foundation\Application`. For example:
<?php namespace Confomo;
class Application extends \Illuminate\Foundation\Application
{
}
Now, let's find where `Illuminate\Foundation\Application` is bound. Thankfully, it's simple: `bootstrap/app.php`. The first non-comment code in the file is:
$app = new Illuminate\Foundation\Application(
realpath(__DIR__.'/../')
);
I think you can guess what's coming next. Just replace those lines with these:
$app = new Confomo\Application(
realpath(__DIR__.'/../')
);
That's it. We're now using our custom `Application` everywhere throughout the site.
So, if our goal is to override the functionality in `Application` that provides the location of the `storage` directory, the final step is to find that functionality and override it.
Thankfully again, a quick glance through the `Illuminate\Foundation\Application` class makes that very clear: there's a method named `storagePath`:
/**
* Get the path to the storage directory.
*
* @return string
*/
public function storagePath()
{
return $this->basePath.'/storage';
}
... so, let's do our business. In our custom `Application`, let's override that method:
<?php namespace Confomo;
class Application extends \Illuminate\Foundation\Application
{
/**
* Get the path to the storage directory.
*
* @return string
*/
public function storagePath()
{
return $this->basePath.'/OMGstorage';
}
}
... and done. We've now customized this path. And, of course, we can use this same set of steps to extend anything else that the `Application` class provides to Laravel.
That's it! I hope this gives you the freedom and power to take more control of your Laravel-based web sites, and also the encouragement to go look around the core even more to learn how everything works.
If you haven't read it yet, go read the Laravel 5.0 - Commands & Handlers post. It'll give much-needed background for this article.
With Laravel 5's Commands (and their handlers), you can, in a simple, direct, and encapsulated way, emit a command to the system. DoThis. HandleACommandThatIsTellingMeToDoThis. It's imperative. It's telling the system what to do.
But sometimes, either as a result of a command, or just in another context, we want to send out a much more abstract notification. You've likely seen that Laravel 4 could trigger events based on a string event name:
$response = Event::fire('auth.login', array($user));
This is sending a notice out to the world of the application: "Hey! Someone logged in! Do whatever you want with this information." It's informative. If you're familiar with the concept of PubSub, that's what's going on with events.
Well, in Laravel 5, the eventing system has been upgraded, and it looks a lot more like the command system we saw in the last post. Rather than identifying an event by a string (`auth.login`), we're actually creating a PHP object and emitting that.
So, let's try it out.
php artisan make:event ThingWasDone
... and that generates this:
<?php namespace SaveMyProposals\Events;
use SaveMyProposals\Events\Event;
use Illuminate\Queue\SerializesModels;
class ThingWasDone extends Event {
use SerializesModels;
/**
* Create a new event instance.
*
* @return void
*/
public function __construct()
{
//
}
}
You can attach data to this object by adding constructor parameters and setting them as properties of the class.
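For instance, here is a minimal sketch of that pattern. The class name and properties are hypothetical, and the base `Event` class and `SerializesModels` trait are omitted so the example runs standalone:

```php
<?php
// Hypothetical sketch: event data rides along as constructor parameters
// stored on public properties, which handlers can then read off $event.
class TalkWasProposed
{
    public $talkId;
    public $conferenceId;

    public function __construct($talkId, $conferenceId)
    {
        $this->talkId = $talkId;
        $this->conferenceId = $conferenceId;
    }
}

$event = new TalkWasProposed(7, 3);
echo $event->talkId; // 7
```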
You then create a handler:
php artisan handler:event SendMailInSomeParticularContext --event="ThingWasDone"
... which generates this:
<?php namespace SaveMyProposals\Handlers\Events;
use SaveMyProposals\Events\ThingWasDone;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldBeQueued;
class SendMailInSomeParticularContext {
/**
* Create the event handler.
*
* @return void
*/
public function __construct()
{
//
}
/**
* Handle the event.
*
* @param ThingWasDone $event
* @return void
*/
public function handle(ThingWasDone $event)
{
//
}
}
Note that the generator has already type-hinted a `ThingWasDone $event` parameter in the `handle` method. You can also use dependency injection, either in the constructor or in the `handle` method, to bring in whatever other tools you need to get this event handled.
Note that just creating an event and its handler doesn't inform the bus that they should be paired together. You need to bind the listening relationship in `app\Providers\EventServiceProvider`, on its `$listen` property:
// app\Providers\EventServiceProvider
$listen = [
ThingWasDone::class => [
SendMailInSomeParticularContext::class,
SaveSomeRecordInSomeOtherContext::class,
DoSomethingElseInResponseToThingBeingDone::class
]
];
As you can see, you're using `::class` to get a string representation of this event's class name, and then you're adding listeners (using `::class` as well).
Triggering events with `::fire`
OK, so it's finally time to trigger the event. Note that these are just simple PHP classes--you could instantiate an Event manually, instantiate its Handler, and pass the Event to the handler method. But the Laravel-provided bus makes it easier, more consistent, and more global:
\Event::fire(new ThingWasDone($param1, $param2));
That's it!
Just like with Commands, you can have your Event implement the `Illuminate\Contracts\Queue\ShouldBeQueued` interface, and that'll make the handling of the event be pushed onto the queue; and you can add the `Illuminate\Queue\InteractsWithQueue` trait to give easy access to methods for interacting with the queue, like deleting the current job.
Note that, like with commands, if you want to attach an Eloquent model to an event, you should include the `SerializesModels` trait at the top of the class definition for that event. At the time of this writing, it's actually already included by default.
That's it! Once you understand commands and handlers in Laravel 5, events are simple: the triggering system is informing the surrounding world that something happened, rather than demanding that the surrounding world do something. But they're both means of encapsulating the intent of a message, and they can play very nicely together, too.
Update: If you're using Laravel 5.2, check out this Laracasts thread on Whoops in Laravel 5.2. I'll update this post soon with updated info, but for now just take a look there.
First, `composer require filp/whoops:~1.0`.
Then open `app/Exceptions/Handler.php`, and in the `render()` method, add a Whoops handler in the `else` condition. Maybe something like this:
/**
* Render an exception into an HTTP response.
*
* @param \Illuminate\Http\Request $request
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
public function render($request, Exception $e)
{
if ($this->isHttpException($e))
{
return $this->renderHttpException($e);
}
if (config('app.debug'))
{
return $this->renderExceptionWithWhoops($e);
}
return parent::render($request, $e);
}
/**
* Render an exception using Whoops.
*
* @param \Exception $e
* @return \Illuminate\Http\Response
*/
protected function renderExceptionWithWhoops(Exception $e)
{
$whoops = new \Whoops\Run;
$whoops->pushHandler(new \Whoops\Handler\PrettyPageHandler());
return new \Illuminate\Http\Response(
$whoops->handleException($e),
$e->getStatusCode(),
$e->getHeaders()
);
}
That's it!
Thanks to this thread on the Laracasts forum for getting me moving in the right direction.
Installing a new Laravel 5 app is as simple as `composer create-project laravel/laravel my-project-name-here dev-develop --prefer-dist`. But what if you have a Laravel 4 app you want to upgrade?
You might think the answer is to upgrade the Composer dependencies and then manually make the changes. Quite a few folks have created walkthroughs for that process, and it's possible—but there are a lot of little pieces you need to catch, and Taylor has said publicly that he thinks the better process is actually to start from scratch and copy your code in. So, that's what we're going to be doing.
This process took me 3 hours the first time (because I was writing this article), and 1 hour the second time. SaveMyProposals isn't hugely complex, but hopefully this guide will keep your upgrade time low.
So, we're working on upgrading SaveMyProposals, in my local `~/Sites/savemyproposals` directory.
We want to have a copy of the new site (a blank Laravel 5 install) and the old site (the Laravel 4.2 SaveMyProposals from the GitHub repo) next to each other, so we'll do an additional clone of the savemyproposals repo into a parallel directory, `l5smp`. I'm going to do the Laravel 5 upgrade in my NEW directory, not in my previous working directory, so that it's easier to make sure I don't lose any git-ignored configuration files in the process.
cd ~/Sites
git clone git@github.com:mattstauffer/savemyproposals.git l5smp
cd l5smp
git checkout -b features/laravel-5
OK, so now let's clean out our `laravel-5` install. We want to delete everything; however, we can't delete `.git`, or we'd be deleting our git repo entirely. So, here's the fastest way I came up with, but I'd love someone to chip in if there's a cleaner solution:
cd ~/Sites/l5smp
rm -rf *
rm .gitattributes
rm .gitignore
Also, if you have any other files in your project directory left over after this—e.g. `.scrutinizer.yml`—delete those too. We'll be copying them over in a later step. You want nothing in your directory except `.git`.
That’s it. We now have a clean install. Let’s get Laravel 5 in there! If you’re like me, you’ll want a clean point you can revert back to, so I actually committed here:
git commit -am "Delete everything in preparation for Laravel 5."
NOTE: If you commit this delete, you’ll be losing the continuity of the history of any files that you plan to bring back later. However, we can use git squash to merge this commit in later, which will bring that continuity back.
OK, let's get the Laravel files in there. Thankfully, Isern Palaus (@ipalaus) got me a very simple version of this step.
git remote add laravel https://github.com/laravel/laravel.git
git fetch laravel
git merge laravel/master --squash
git add .
git commit -am "Bring in Laravel 5 base."
composer install
You should be able to check to make sure this app works (without anything in it) by running the following:
php -S localhost:8090 -t public/
And visiting `http://localhost:8090/` in your browser. If you see the default Laravel welcome page, you're doing good.
Now, to bring everything back. What I did was open the directories up in side-by-side panels in iTerm 2 and just start listing out the files in my old site (`~/Sites/savemyproposals`) and moving them into the right places in the new site (`~/Sites/l5smp`). Here are the steps; I'll be referring to OLD and NEW, OLD being the directory for my Laravel 4 code and NEW being the new blank Laravel 5 install.
If you already have a top-level PSR-0/PSR-4 namespace set up, like I did for SaveMyProposals, you’ll want to use the new app:name Artisan command.
php artisan app:name SaveMyProposals
If you follow the common community practice of having a folder either at the top level or in app
with the name of your top-level namespace—e.g. I have app/SaveMyProposals
that is my PSR-0 source.
The easiest way to map this is just to move all of the folders under this folder into your app
folder, and it'll just bring them into your already-established namespace. Done.
Work through your composer.json
in OLD, praying as you go that all of your packages have been updated for Laravel 5, and move your dependencies and other customizations into NEW’s composer.json
.
Now, try a composer update
to see what you get.
Move from `app/commands` => `app/Console/Commands`.
Either namespace your commands, or add them to the `composer.json` classmap autoloader.
Note that, in Laravel 5, the default Inspire command comes with a `handle()` method, but Laravel will call either `fire()` (old style) or `handle()` (new style), whichever it finds.
Move the bindings of your commands from `start/artisan.php` into `app/Console/Kernel.php`, where the `$commands` array lists out all of the commands to register.
See Configuration below.
Move from `app/controllers` => `app/Http/Controllers`.
Either namespace your controllers (directions below) or drop the namespace from the abstract `app/Http/Controllers/Controller.php` and autoload the `app/Http/Controllers` directory via Composer classmap.
Move from `app/database` => `database`.
Delete the `2014_10_12_000000_create_users_table` migration, since you should already have this (although you should make sure that you have the `remember_token` field, which was added in 4.1.26). You can keep the password resets migration--that's new in Laravel 5.
Laravel 5 has moved to focusing on middleware for the things we used to use filters for, but you can still port your old filters over. Just open up `app/Providers/RouteServiceProvider.php` and paste your bindings into `boot()`. E.g.
// app/filters.php
Route::filter('shall-not-pass', function() {
return Redirect::to('shadow');
});
could be moved in like this:
// app/Providers/RouteServiceProvider@boot()
$router->filter('shall-not-pass', function() {
return \Redirect::to('shadow');
});
Note that you don't need to move over any of the filters that come in by default; they're all here, but now as Middleware.
Move from `app/lang` => `resources/lang`.
Laravel 5—and most of the advice from the community for quite some time—has done away with the concept of a `models` folder. But if your old app uses it, just create a `models` directory within `app` and classmap autoload it (by adding it to `composer.json`'s classmap autoload section):
"autoload": {
"classmap": [
"database",
"app/models"
]
}
Note that the `User.php` that comes with Laravel 5 lives in the `app` directory, so you could also place your model files there and put them in your top-level namespace (e.g. `SaveMyProposals\Conference` for the Conference model).
If you use the `SoftDeletingTrait` on any of your models, you'll want to rename the trait to `SoftDeletes`.
Move `app/routes.php` => `app/Http/routes.php`.
Adjust any routes that use the built-in filters from, for example, `'before' => 'auth'` to `'middleware' => 'auth'`.
artisan.php
See notes above about moving command bindings.
global.php
`global.php` is a catchall for many people. Anything in here should likely be added to a service provider; but if not, you can register bindings in `SaveMyProposals\Providers\AppServiceProvider` in the `register()` method.
Move from `app/tests` => `tests`.
Move from `app/views` => `resources/views`.
If your controllers weren't namespaced in the old codebase, you can either bring them in with no namespace, or add namespaces to them.
If you want to add namespaces, just go to each controller and add `SaveMyProposals\Http\Controllers` as their namespace.
If you want to go without, edit `app/Providers/RouteServiceProvider.php` and set `protected $namespace` equal to `null`. Then add the controllers directory to `classmap` in the `composer.json` autoload section. Then edit the `map()` method to be just this (replace the entire `$this->loadRoutesFrom` line):
include app_path('Http/routes.php');
Note: If you namespace your controllers, all of your internal façade calls will fail; the simpler way is to choose not to namespace controllers. If you do, you'll see something like `Class 'SaveMyProposals\Http\Controllers\Auth' not found.` If this happens, you just need to `use Auth`, `use Session`, etc. at the top of the controller—or just prepend `\` to each inline (e.g. convert `Session::flash('stuff', 'other stuff');` to `\Session::flash('stuff', 'other stuff');`).
Just like controllers, you can either namespace or tweak the setup. Namespacing Artisan commands works just like with controllers. You can tweak the set up to work with non-namespaces commands by changing the referred-to namespace in app/Console/Kernel.php
's $commands
property, and then classmap
autoloading the app/Console/Commands
directory in composer.json
.
If you’ve made any customizations to files in bootstrap
—and if you made any, it was likely only start.php
—you’ll want to move them over. Note that detectEnvironment
behaves differently in Laravel 5, so don’t even worry about copying over anything about environment detection. You’re going to be re-doing this.
You can delete every file out of NEW’s public directory except index.php, and move your OLD public files in.
You’ll notice that the Laravel 5 app structure puts the source Less files in resources/assets/less, so if you want to follow this convention you can put Sass or Less files and any other sources there. But you don’t have to, so for this walkthrough we won’t.
This is up to you: readme.md, .scrutinizer.yml, .travis.yml, travis.php.ini, package.json, gulpfile.js, whatever. Note that Laravel 5 ships with package.json and gulpfile.js by default, so you’ll want to check those out before you just overwrite them.
Also, be sure to bring in any customizations you’ve made to .gitignore, .gitattributes, or phpunit.xml.
I copied .env.local.php from OLD to NEW. I then edited it to turn it from a PHP array into .env format, from this:
<?php return [
'key' => 'value'
];
to this:
key=value
I also edited .env.example to show what keys I expect in each .env file:
key=valueHere
I also added APP_ENV (set to "local"), APP_DEBUG (set to true), APP_KEY (set to my encryption key), DB_HOST, DB_DATABASE, DB_USERNAME, and DB_PASSWORD (set to their appropriate values), and CACHE_DRIVER and SESSION_DRIVER (set to 'file'), as these are used internally in the framework.
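Putting those keys together, a .env file along these lines would result (the values below are placeholders, not the ones from the original site):

```ini
APP_ENV=local
APP_DEBUG=true
APP_KEY=YourEncryptionKeyHere
DB_HOST=localhost
DB_DATABASE=savemyproposals
DB_USERNAME=homestead
DB_PASSWORD=secret
CACHE_DRIVER=file
SESSION_DRIVER=file
```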
Drop the concept of local vs. production vs. staging config files. Drop the idea of .env.local.php, .env.staging, etc. Configuration file loading and environment detection are now endlessly simpler.
Every piece of config that's consistent across all installs should live in the very-familiar config files in the config directory.
Every piece of config that's specific to each install should live in .env, which should be gitignored.
.env.example should show all of the fields that should be present in each .env file.
So, copy all of your OLD universal values from the config files into the NEW config directory, then extract install-specific values into .env and .env.example, and then use those values in your code via env('KEY_NAME_HERE').
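For example, a config file entry that pulls a per-install value out of .env might look like this sketch (the fallback values are assumptions, not from the post):

```php
// config/database.php (fragment)
'mysql' => [
    // env() reads from .env; the second argument is a default fallback
    'host'     => env('DB_HOST', 'localhost'),
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD', ''),
],
```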
The fastest trick is just to use the pre-existing User model, but if you can't do that, here's what you want to do:
Delete the following from your use block:
use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;
Add the following to your use block:
use Illuminate\Auth\Authenticatable;
use Illuminate\Auth\Passwords\CanResetPassword;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;
Remove the UserInterface and RemindableInterface interfaces.
If you used them, remove Illuminate\Auth\Reminders\RemindableTrait and Illuminate\Auth\UserTrait from your use block and your class declaration.
Mark it as implementing the following interfaces:
implements AuthenticatableContract, CanResetPasswordContract
Include the following within the class declaration, to use them as traits:
use Authenticatable, CanResetPassword;
And finally, either change the namespace of your User model to your app namespace, or change the 'model' property in config/auth.php to the correct namespace (e.g. User instead of SaveMyProposals\User).
And if you're using your own User model, you can delete app/user.php.
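Putting all of those User-model steps together, the result looks roughly like this sketch (it assumes the SaveMyProposals namespace from this walkthrough and a minimal property list; your model's table, fillable fields, etc. will differ):

```php
<?php namespace SaveMyProposals;

use Illuminate\Auth\Authenticatable;
use Illuminate\Auth\Passwords\CanResetPassword;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;
use Illuminate\Database\Eloquent\Model;

class User extends Model implements AuthenticatableContract, CanResetPasswordContract
{
    // These traits satisfy the two contracts above
    use Authenticatable, CanResetPassword;

    protected $table = 'users';
    protected $hidden = ['password', 'remember_token'];
}
```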
If you're using Form or Html helpers, you'll see an error stating class 'Form' not found (or the same for Html). Just use Composer to require "illuminate/html": "~5.0".
You'll also need to get the façade and the service provider working. Edit config/app.php, and add this line to the 'providers' array:
'Illuminate\Html\HtmlServiceProvider',
And add these lines to the 'aliases' array:
'Form' => 'Illuminate\Html\FormFacade',
'Html' => 'Illuminate\Html\HtmlFacade',
The best way to handle the change from {{ to {!! for raw HTML output in Blade is to find and replace any time you KNOW you have to have raw output—for example, if you're using Laravel form helpers—replacing {{ with {!! and }} with !!} in those contexts. Everywhere else, just leave {{ and }}; that's the default echo syntax from now on.
If for some reason you need to use the old Blade syntax, you can define that. Just add the following lines at the bottom of AppServiceProvider@register():
\Blade::setRawTags('{{', '}}');
\Blade::setContentTags('{{{', '}}}');
\Blade::setEscapedContentTags('{{{', '}}}');
Note that if you change the raw tags this way, your comments with {{-- will no longer work.
If you did the commits along the way like I did, you can squash them together to get continuity. Run git log to see how many commits you used; I used 3. Then run git rebase -i HEAD~3 (replacing 3 with your number).
This will open your editor, and you can now squash the commits. If you're unfamiliar with squashing, check out my tutorial on Squashing Git Commits.
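Since git rebase -i is interactive and hard to show in a script, the following self-contained sketch demonstrates the equivalent squash non-interactively with git reset --soft (the throwaway repo, file names, and commit messages are invented purely for the demo):

```shell
set -e
# Build a throwaway repo with 4 commits
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
for i in 1 2 3 4; do
  echo "change $i" >> notes.txt
  git add notes.txt && git commit -q -m "commit $i"
done
# Squash the last 3 commits into one (same end state as rebase -i + squash)
git reset --soft HEAD~3
git commit -q -m "commits 2-4, squashed"
count=$(git rev-list --count HEAD)
echo "$count"
```

The resulting history has two commits: the original first commit plus the squashed one, with the file contents untouched.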
Because everything's namespaced, all of your controllers' View::make() calls (and any other façade accessed from your namespaced controllers) will break, because PHP can't find the top-level View, Auth, etc. in your namespace. Probably the simplest solution is to add use View at the top of the file, although there are quite a few more architecturally "pure" ways.
If you miss the Whoops error handler, I have a post on how you can bring it back.
Lots has changed in Laravel 5 with how packages work, so it's likely there will be a lot of wrinkles to be ironed out there. If you run into particular issues there, please leave notes in the comments so I can get this section more comprehensive. For now, Ryan Tablada warns: "Be prepared for function "package" does not exist".
There are probably plenty of packages that won't make it. Most Laravel-specific packages won't. In this codebase, bugsnag-laravel was the only such package.
This was a quick run-through. I'm confident that I'm missing some pieces here, because I only picked up what happened for this particular site's upgrade. So, in a move counter to my usual policy, I'm going to open up comments on a Github Gist where folks can provide corrections/updates/etc.
That's it! As you can see, there are a lot of pieces, but this is actually a very simple and quick upgrade, considering that we're upgrading major versions of a framework here. Go Forth and Upgrade!
If you see this error:
PHP Fatal error: Class 'Eloquent' not found in /path/to/YourModel.php
... that means YourModel is extending \Eloquent. To make this work, just add this line to that model's use block:
use Illuminate\Database\Eloquent\Model as Eloquent;
If you see this error:
Catchable fatal error: Argument 1 passed to Illuminate\Foundation\Application::__construct() must be an instance of Illuminate\Http\Request, string given
... you need to run composer install.
If you see the error:
Call to a member function domain() on a non-object
It means one of your route actions isn't linking correctly. For example, if you're linking to a route named "signup" and you don't have a route with that name, you'll get this error.
More likely in a Laravel 5 upgrade, it has to do with the namespacing or non-namespacing of your controllers.
If you start seeing Illuminate\Session\TokenMismatchException show up--likely in your logs--this is because, by default, Laravel 5 has CSRF protection enabled on all routes. You can remove the CSRF protection middleware from the $middleware stack in App\Http\Kernel and move it to the $routeMiddleware stack as an optional key, or you can adjust all of your forms--even those submitted via AJAX--to ensure they all send the CSRF token.
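As a sketch, that middleware move in App\Http\Kernel might look something like this (the class names match Laravel 5's defaults, but the 'csrf' key name is a choice, not a requirement):

```php
// app/Http/Kernel.php (fragment)

// Remove VerifyCsrfToken from the global stack...
protected $middleware = [
    // 'App\Http\Middleware\VerifyCsrfToken',  <-- removed
];

// ...and register it as route middleware so you can opt in per route:
protected $routeMiddleware = [
    'auth' => 'App\Http\Middleware\Authenticate',
    'csrf' => 'App\Http\Middleware\VerifyCsrfToken',
];
```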
The metric for whether or not the scheduling process has been successful is: Can I, ahead of time, trigger a new episode of the Five Minute Geek Show to publish at a certain time? This is important to me, because I have FMGS on a set schedule, but I'm not always at my main computer to deploy then.
Normally, publishing to the site means just running the publish script locally. My particular publish script generates the new files, moves them into a Git repository, and pushes up the git repo, relying on Forge to catch the Github webhook and deploy.
Because we're scheduling it for the future, we can no longer rely on my local machine being running at the time of the deploy. So we need to find some other way to sync it up.
When I first mentioned this on Twitter, Adam Wathan pointed out the at command, which is like a one-off cron job that lets you schedule a command to run later. I like it, and I plan to use it.
So, my command was this:
$ at 10:00
at> cd ~/fiveminutegeekshow.com
at> do whatever commands I want here
at> <ctrl-d>
You can check which commands are scheduled with $ at -l.
The first option is to get rid of the idea of a "distribution" Github repo entirely. We could upload the "source" repo to the production server, along with its output_prod folder, and then schedule a sculpin generate --env=prod command at the given time.
We can just point Forge's web root to /home/fiveminutegeekshow.com/output_prod, and then every time a new build is made, Forge will be serving the new content.
We could keep the situation just like it is right now, disable Forge's auto-deploy, and instead schedule a git pull in the proper directory at the given time.
Capistrano is a deploy system that has a clever idea: To avoid downtime, every new deploy of the site should have its own folder in a "deploys" directory. Then when it's time to actually publish the new version, all you need to do is symlink your "current" directory (which is what your web server is serving from) over to the new directory. This also makes it very easy to roll back to previous versions.
We could do the same sort of thing. This is where it starts getting complex, though. We're talking about likely using something like Capistrano or Chef to deploy the code, and then still needing to schedule the symlink in advance. I know someone's going to suggest this way of doing it, which is why I'm bringing it up, but it seems crazy to me.
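The Capistrano-style symlink swap is easy to sketch in a few lines of shell (the directory names below are invented for the demo; in a real setup the web server's root would point at current):

```shell
set -e
base=$(mktemp -d)
# Two "releases", each a complete copy of the site
mkdir -p "$base/releases/20150101" "$base/releases/20150102"
echo "old version" > "$base/releases/20150101/index.html"
echo "new version" > "$base/releases/20150102/index.html"
# Publish the old release, then "deploy" by repointing the symlink (-n
# replaces the symlink itself rather than creating a link inside it)
ln -sfn "$base/releases/20150101" "$base/current"
ln -sfn "$base/releases/20150102" "$base/current"
served=$(cat "$base/current/index.html")
echo "$served"
```

Rolling back is just repointing current at the previous release directory.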
For now I'm going to try Option 2. I have an episode going out at 10am Eastern today, so I'm going to see if I can get it set up for this afternoon, and report back here.
UPDATE: Option 2 worked like a charm!
The new feature set is all around Commands, which already exist in Laravel, but are getting a lot of new love in Laravel 5.0.
I’ll be using examples in this blog post from a new application I’m working on called SaveMyProposals, which allows conference speakers to save talk proposals.
You can learn about the concept of a command, a command handler, and a command bus in more depth from Shawn McCool, but essentially:
A command is a simple object that’s meant to be a message. It contains only the information you need in order to do something. Our example here will be “Duplicate Talk Command”, which is an imaginary command that our system (a controller or an Artisan command, likely) will dispatch any time a user has chosen to duplicate a talk proposal. The duplicate talk command will have all of the properties set on it that we need to duplicate a talk—likely either a serialized Talk object or a TalkId.
A command handler is a class tasked with doing something in response to the command. A command can be passed through one or many handlers; each pulls out important information from the command and does something in response.
A command bus is the system that allows you to dispatch (create and send off) commands, that matches commands to their handlers, and that makes everything play together. Often folks write their own command busses, but Laravel is providing one out of the box so we don't need to worry about this in this article.
Before we get into the entire structure of how to use commands in Laravel 5, let’s look at what the end use case will look like. Imagine a user visits a route something like savemyproposals.com/talks/12345/duplicate, which routes them to TalkController@duplicate(12345).
We'll have a controller method to handle it:
// Http\Controllers\TalkController
...
public function duplicate($talkId)
{
$talk = Talk::findOrFail($talkId);
$this->dispatch(new DuplicateTalkCommand($talk));
// Depending on implementation, this could also just be:
// $this->dispatch(new DuplicateTalkCommand($talkId));
}
Then we'll have a command:
// Commands\DuplicateTalkCommand
...
class DuplicateTalkCommand extends Command
{
public $talk;
public function __construct(Talk $talk)
{
$this->talk = $talk;
}
}
And a command handler:
// Handlers\Commands\DuplicateTalkCommandHandler
...
class DuplicateTalkCommandHandler
{
public function handle(DuplicateTalkCommand $command)
{
// Do something with $command
dd($command);
}
}
As you can see, our controller creates a DuplicateTalkCommand with the necessary information, dispatches it using the built-in command bus dispatcher, and then it’s handled (automatically) by its handler.
OK, so let’s look first at where those commands and handlers live, and then how we generate them.
There are two new folders in app/: Commands and Handlers, and Handlers has two subfolders: Commands and Events (which shows us we can look forward to Event handling, too.)
app/
Commands/
Handlers/
Commands/
Events/
As you can guess, Commands go in the app/Commands folder, and Command Handlers go in the app/Handlers/Commands/ folder—with the exact same name as their Command, but with Handler appended to the end.
Thankfully, you don’t have to do this on your own. There’s a new Artisan generator that’ll make it simple to create your own command:
$ php artisan make:command DuplicateTalkCommand
By default, this creates a self-handling command that isn't pushed to the queue. Pass it the --handler flag to generate a handler, and the --queued flag to make it queued.
This generates a Command (app\Commands\DuplicateTalkCommand.php) and, if you passed the --handler flag, a Handler (app\Handlers\Commands\DuplicateTalkCommandHandler.php); the Handler’s handle() method is generated automatically typehinted for its paired Command.
So, in order to create a new DuplicateTalkCommand, you'd do the following:
1. Run php artisan make:command DuplicateTalkCommand --handler
2. Edit DuplicateTalkCommand to give it a public property of $talk, set to be injected via the constructor
3. Edit DuplicateTalkCommandHandler and write its handle() method to do whatever you actually want to have happen--likely using a repository or other database-access layer to duplicate the talk and save the duplicate.
That's it! You're now using commands in Laravel 5.0! Everything from here on out is just nitty-gritty details about queues, traits, interfaces, and other special considerations and tricks.
If you want any command to be queued every time you dispatch it (instead of operating synchronously), all you need to do is have it implement the ShouldBeQueued interface. Laravel will read that as a signal to queue it, and it’ll be pushed onto whichever queue you’re using instead of running inline.
...
class DuplicateTalkCommand extends Command implements ShouldBeQueued
{
This means it’s now even easier than ever to integrate queues into your normal workflow.
Adding this trait to your command will give you all of the features on your command that you’re used to in traditional queue commands: $command->release(), $command->delete(), $command->attempts(), etc.
...
class DuplicateTalkCommand extends Command implements ShouldBeQueued
{
    use InteractsWithQueue;
If you pass an Eloquent model in as a property, like I did in the example above, and you want to queue your commands (instead of just letting them run synchronously), it might cause you some trouble because of how Eloquent models serialize. But there’s a trait you can add to the command named SerializesModels that will smooth out any of those problems. Just use it at the top of your command:
...
class DuplicateTalkCommand extends Command implements ShouldBeQueued
{
use SerializesModels;
You’ll notice that, in the example above, we were able to just use $this->dispatch() in the controller. This is controller magic, but it’s magic that’s accessible via a DispatchesCommands trait, which you can apply to anything other than a controller.
So, if you want a service class, for example, to be able to use $this->dispatch() in its methods, just use the DispatchesCommands trait on your service class and you’re good to go.
If you’d rather be more direct and clear with your use of the bus, instead of using the trait you can inject the bus into your constructor or method. Just inject Illuminate\Contracts\Bus\Dispatcher and you’ll have a bus ready to dispatch from.
...
public function __construct(\Illuminate\Contracts\Bus\Dispatcher $bus)
{
$this->bus = $bus;
}
public function doSomething()
{
$this->bus->dispatch(new Command);
}
We’ve already seen that $bus->dispatch(new Command(params...)) is the simplest way to dispatch a command. But sometimes the parameter list for a new command can get larger and larger—for example, when your command is handling a Form Request.
...
class CreateTalkCommand extends Command
{
public function __construct($title, $description, $outline, $organizer_notes, $length, $type, $level)
{
Keeping up the instantiation call for this could get crazy.
$this->dispatch(new CreateTalkCommand($input['title'], $input['description'], $input['outline'], $input['organizer_notes'], $input['length'], $input['type'], $input['level']));
Hm, take a look at that. Often we’re just passing in properties with the same key, accessed from an array or a Request object, right? Thankfully, there’s a workaround to make that very easy:
$this->dispatchFrom('NameOfCommand', $objectThatImplementsPHPArrayAccessible);
That’s it! So you could do this:
$this->dispatchFrom(CreateTalkCommand::class, $input);
... or even this:
public function doSomethingInController(Request $request)
{
$this->dispatchFrom(CreateTalkCommand::class, $request);
Laravel will auto-map the keys on that array or ArrayAccess object to the same property names in your command constructor.
If you’d rather avoid the hassle of a separate Command and CommandHandler, you can make a Command “self-handling”, which just means that there’s only a single handler for it, and that handler is the command itself. Just add a handle() method on that command, and have the command implement the SelfHandling interface:
...
class DuplicateTalkCommand extends Command implements SelfHandling
{
...
public function handle()
{
// Do stuff with $this->talk
}
Your logic just goes in that handle() method.
All of these interfaces live in the Illuminate\Contracts\Bus or Illuminate\Contracts\Queue namespaces (e.g. Illuminate\Contracts\Bus\SelfHandling).
You also no longer need to call $command->delete() at the end of your handler. As long as your handler doesn’t throw any exceptions, Laravel will assume it completed properly and will delete the item off the queue.
That was a lot. If I missed anything or wasn’t particularly clear, please let me know—there’s a lot to cover in here, and I’m on vacation so I’m doing it in fits and spurts. But I hope this gives you a good idea of how it’s all going to work—and like I said, Taylor’s video on Laracasts covers all this and more, and there’s plenty more to come.
If you haven’t done this, check out my post on Getting Your First Site Up and Running in Laravel Forge.
Create a new site with your appropriate domain—for example, craft.mattstauffer.co. Keep the web directory set to public—this is the directory your new site will serve its files from.
Once it's done installing, click the little pen icon under "Manage".
Note: Forge just removed their auto-installers as of 2015-07-09. I'll try to update this guide as soon as possible to make this still work.
When you spin up a new site on Forge, “Craft CMS” is one of the big options available to you when you configure your new site. Just choose that.
Pick a database name, and paste in the database password you got in an email from Forge when you first set up this server.
Click the “Finish installation” button. At the time of this writing, it points to http://your-servers-ip-address/admin/install, which won’t work unless this is your only site on this server, so if you see a broken page, just navigate to http://your-craft-domain.com/admin/install (e.g. http://craft.mattstauffer.com/admin/install).
Now just walk your way through the installation process, and you’ll be ready to go!
If you’ve never used Craft before, it’s a really powerful content management system based on channels of content. Imagine if WordPress were originally designed to be a CMS, instead of being designed as a blogging platform, and imagine the codebase were on top of a modern framework (Yii) instead of legacy procedural code. That’s Craft. (If you've ever used ExpressionEngine, it's like that, minus the drama and the CodeIgniter, and run by one of the best plugin devs from the EE community.)
Craft has a great web site, StackExchange, community site, and the documentation is improving every day. To learn a little bit about how great Craft is, check out the Features section.
If you’ve never used Forge before, it’s a system that’s built to make administering custom VPSes like those you can get from Linode and DigitalOcean simpler and more consistent. You can check out all of my blog posts on Forge to learn a little more about how to use it and the options it provides. Forge also has a customer support site with some basic FAQs.
It's called "Laravel Forge" only because it's run by the guy beyond Laravel, Taylor Otwell. But it works fine for non-Laravel projects.
That’s it! You’re now up and running on a custom VPS with a powerful CMS. Enjoy!
Sculpin itself has pretty great documentation, but I still wanted to provide a run-down of the steps I took to get the Five Minute Geek Show's site running in Sculpin and hosted on Forge.
It's best to check out the Sculpin Quick Start to get up and running, but in short, here are the commands I ran:
$ curl -O https://download.sculpin.io/sculpin.phar
$ chmod +x sculpin.phar
$ mv sculpin.phar /usr/local/bin/sculpin
$ cd ~/Sites
$ git clone https://github.com/sculpin/sculpin-blog-skeleton.git fiveMinuteGeekShowBlog
$ cd fiveMinuteGeekShowBlog
$ sculpin install
$ sculpin generate --watch --server
$ cd source/_posts
# Edit whichever files
# Preview in browser at http://localhost:8000/
$ sculpin generate --env=prod
$ rsync -avze 'ssh -p 999' output_prod/ user@example.com:public_html
If you're new to Sculpin or static site generators in general, please read on. If you're not, please skip to the "Sculpin on Forge" section.
Here are the steps listed above, broken down:
$ curl -O https://download.sculpin.io/sculpin.phar
$ chmod +x sculpin.phar
$ mv sculpin.phar /usr/local/bin/sculpin
Our first lines download the sculpin.phar executable file to our computer, mark it as executable, and then move it into our bin directory so it's in our PATH and will be runnable from anywhere on the system (via the terminal). It's now installed "globally," meaning it's not just connected to a particular folder or project.
This step has been run successfully if you can spin up a new terminal window, navigate anywhere on your computer, run sculpin, and get a positive response. If not, the issue is likely with your PATH.
$ cd ~/Sites
$ git clone https://github.com/sculpin/sculpin-blog-skeleton.git fiveMinuteGeekShowBlog
$ cd fiveMinuteGeekShowBlog
Sculpin can be used for a lot more than blogs, but the sculpin-blog-skeleton makes it really easy to get up and running with a Sculpin-based blog. So, we're now cloning a copy of the blog skeleton repository into where we keep our sites (I use ~/Sites).
$ sculpin install
sculpin install is basically a wrapper around composer install, so we're just installing our dependencies locally to this project.
$ sculpin generate --watch --server
sculpin generate scans the source directory and generates static files in output_dev or, if the --env ENVNAMEHERE flag is used, the output_ENVNAMEHERE folder.
NOTE: Usually you don't ever pass an --env flag. Sculpin defaults to --env=dev, and it automatically serves its previews from the output_dev folder, so you don't have to worry about that. And for prod, your publish script will usually have --env=prod and output_prod baked into it, so you won't have to think about this at all unless you're writing a publish script.
Adding the --watch flag sets it as a long-running script (like Grunt or Gulp) that watches the filesystem and auto-generates on any changes.
Adding the --server flag spins up a server at http://localhost:8000/ (you can specify the port with --port=8090) for you to check your changes at.
Now you can edit, add, or delete files anywhere in the source directory. Blog posts, in this skeleton, go in source/_posts. Check out app/config/sculpin_kernel.yml to set the URL structure for your content types, or app/config/sculpin_site.yml to change site-wide variables.
For a great example (it's the one I used) of how to modify that skeleton for Podcast web sites, check out Adam Wathan's Full-Stack Radio repository.
$ rsync -avze 'ssh -p 999' output_prod/ user@example.com:public_html
The default prescribed method to move your files from your local site to your remote server is not git; you're saving your source in git, but you're not actually hooking up the production web site to your git repo. Rather, you're using this last line to copy the files from the output_prod folder of your local install up to the public_html (or whatever) folder on your production server.
rsync -avze 'ssh -p 999' sets the basic context, flags, and permissions for this rsync session.
output_prod/ is the directory to copy from.
user@example.com is your username and domain for your remote server.
public_html is the remote directory it should upload to.
If you've ever used Forge before, you know it's absurdly easy to spin up a server and hook it up to a Github repo.
So, I created a new Github repo--this is not the only way to do it, but it is definitely one option--named FiveMinuteGeekShowPublic, and made it empty. I then cloned it to my ~/Sites directory.
I spun up a new site on Forge on a Linode box for http://fiveminutegeekshow.com/, told it to pull its data from this new repo, and then set it to "Quick Deploy" (meaning every time I push to the master branch, it runs a certain script).
I edited the Quick Deploy script to be appropriate for this site:
cd /home/forge/fiveminutegeekshow.com
git pull origin master
Then I was ready to go. I set up a publish script in my FiveMinuteGeekShow repo that generates production HTML, copies the files from Sculpin's output_prod folder over to my local FiveMinuteGeekShowPublic folder, and then git commits and pushes.
I ran chmod +x publish.sh, and now I publish my blog by navigating to the fiveMinuteGeekShow directory and typing ./publish.sh.
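The post doesn't show the actual publish.sh, but a minimal sketch of the idea looks like this. Every path and commit message below is an assumption, and the real first step would be sculpin generate --env=prod; here the generated output is simulated so the sketch is self-contained and runnable:

```shell
set -e
work=$(mktemp -d)
# Simulate the Sculpin source repo's generated output
mkdir -p "$work/fiveMinuteGeekShow/output_prod"
echo "episode page" > "$work/fiveMinuteGeekShow/output_prod/index.html"
# The "public" repo that Forge watches
mkdir "$work/FiveMinuteGeekShowPublic"
cd "$work/FiveMinuteGeekShowPublic"
git init -q
git config user.email demo@example.com && git config user.name demo
# Copy generated files over, then commit; in real life, push so Forge deploys
cp -R "$work/fiveMinuteGeekShow/output_prod/." .
git add -A && git commit -q -m "Publish site"
# A real script would end with: git push origin master
commits=$(git rev-list --count HEAD)
echo "$commits"
```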
As you can tell, my publish script relies on a certain directory structure on my local machine, which is quick and easy but also a bit hacky.
You can see in Sculpin's default documentation that you can also just use rsync to copy the files directly up to your server. Check out the notes above on how the rsync command is structured.
You could do this with Forge. Just don't sync a Github repo, and use the credentials you were emailed when you first spun up this Forge server to set up the rsync to copy your output_prod folder up to the remote server. Same deal, and it doesn't rely on Git.
I have to thank Adam Wathan for this one. I tried to get a custom 404 page by setting error_page 404 /404/index.html in my Forge nginx configuration and discovered that it wouldn't work.
Adam helped me realize that Forge's default references to all of the .php files would prevent nginx from ever hitting the 404 page, so he helped me clean up my nginx config. It ended up looking like this:
server {
listen 80;
server_name fiveminutegeekshow.com;
root /home/forge/fiveminutegeekshow.com;
error_page 404 /404/index.html;
# FORGE SSL (DO NOT REMOVE!)
# ssl_certificate;
# ssl_certificate_key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
index index.html index.htm;
charset utf-8;
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/fiveminutegeekshow.com-error.log error;
location ~ /\.ht {
deny all;
}
}
Now, any time someone visits a bad route, they'll get the 404 page.
As you can see, wrapping your head around Sculpin--and static site generators in general--can take a minute, but once you get it, it's actually a very simple process, and hosting it on Forge is surprisingly simple.
Using Github pages or Heroku are also great options, but if you're already in the Forge space--or if you want to eventually add other functionality to the site other than just the static files being generated by Sculpin--Sculpin + Forge is a great combination.
On October 20th, I had the pleasure to join two of my good friends, Ian Landsman & Andrey Butov, on their excellent Bootstrapped podcast. We talked about running a SaaS and a consultancy at the same time, why I'm so happy, Laravel, and a ton more.
Bootstrapped, Episode 51, "Special Guest: Matt Stauffer"
I was honored to be the first guest on my friend Adam Wathan's Full Stack Radio podcast. We talked CSS, OOCSS, BEM, SMACSS, preprocessors, CSS architecture, and more.
Episode 1: CSS Semantics with Matt Stauffer
I started recording 5-minute YouTube videos talking about what I'm thinking about in dev, and it's turned into a podcast/videolog series. There's now 5 minutes of content in each, with intros and outros, so it turns out a bit longer than 5 minutes total.
It's been a ton of fun, and I hope I continue to have the opportunity to share and teach and goof around.
Laravel 5.0 is introducing a pretty incredible cron-style scheduler (similar to Indatus' Dispatcher) built into the core. Just point a cron job that runs once a minute at artisan schedule:run and you're good to go.
* * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1
For example, you can now bind the following event to clear your auth reminders daily:
$schedule
->command('auth:clear-reminders')
->daily()
->sendOutputTo($logPath)
->emailOutputTo('me@me.com');
You can use command() to call Artisan commands, call() to call methods or functions, or terminal() to run command-line scripts:
$schedule->call('YourClass@someMethod')->twiceDaily();
$schedule->call(function() {
// Do stuff
})->everyFiveMinutes();
$schedule->terminal('cp oldThing newThing')->dailyAt('8:00');
You can also use a callback to decide when something will or won't run, using when() or skip():
$schedule
->call('Mailer@BusinessDayMailer')
->weekdays()
->skip(function(TypeHintedDeciderClass $decider)
{
return $decider->isHoliday();
}
);
This is just a quick introduction, though; check out Eric's post for a fuller rundown: Laravel 5 Scheduler on Laravel-News
Homestead.yaml by default. There was no prescribed place to install it, no global commands for accessing the box, and any time you actually customized your Homestead.yaml file you instantly dirtied your Homestead Github clone, making upgrading difficult.
You can guess where I’m going with this. All of these things are problems no more. The latest version of the Homestead ecosystem has just been released, and it’s moved Homestead into a globally installable Composer package which copies Homestead.yaml (and any other user-editable files) into ~/.homestead on your machine. This way there’s a clear future upgrade path; you get instant global access; and it’s clear where your config file should live.
As always, the Laravel docs are the best place to go for the fullest installation instructions. But here’s the one section that’s most different from the past, copied directly from the docs:
Installing Homestead
Once the box has been added to your Vagrant installation, you are ready to install the Homestead CLI tool using the Composer global command:
composer global require "laravel/homestead=~2.0"
Make sure to place the ~/.composer/vendor/bin directory in your PATH so the homestead executable is found when you run the homestead command in your terminal.
Once you have installed the Homestead CLI tool, run the init command to create the Homestead.yaml configuration file:
homestead init
The Homestead.yaml file will be placed in the ~/.homestead directory. If you're using a Mac or Linux system, you may edit the Homestead.yaml file by running the homestead edit command in your terminal:
homestead edit
When you look at the list of available commands, it’ll look a lot like what you had available through vagrant:
What’s great now is that you can run any of these from anywhere on your machine by simply typing homestead COMMAND.
You’ll notice a few unique commands, however, that aren’t just maps to the same Vagrant command:
Like you read in the docs, Homestead edit (on Mac and Linux machines) will automatically open up your Homestead.yaml
file in your system’s default editor.
Homestead init creates the ~/.homestead
directory and places a skeleton new Homestead.yaml
file in it, as well as after.sh
and aliases
, two additional files that allow you to customize your provisioning.
Homestead update runs vagrant box update
, so you can update the Homestead machine image—for example, when Taylor adds PHP 5.7, this will be the way to upgrade your Homestead image.
In addition to folders
and sites
there’s now a databases
option in Homestead.yaml
that allows you to specify databases for the vagrant box to create when it provisions.
When you get into your new ~/.homestead
directory, you’ll see two familiar files—Homestead.yaml
and aliases
—and one new one, after.sh
.
.
..
Homestead.yaml
after.sh
aliases
From the internal after.sh
docs:
If you would like to do some extra provisioning you may
add any commands you wish to this file and they will
be run after the Homestead machine is provisioned.
IMPORTANT: At the time of writing this, I don't know how to upgrade to the new version of Homestead without it just creating a new Homestead box from scratch. You won't lose your old box (you can always access the old way), but your new box will be fresh, not imported from the old one. Do you know how? Please let me know on Twitter: @stauffermatt
Most of you reading this already use Homestead. So, what does the upgrade path look like?
For starters, install it globally and make sure the Composer bin is in your PATH (like above).
Now, run homestead init
anywhere in your terminal. You should see the following output:
○ homestead init
Creating Homestead.yaml file... ✔
Homestead.yaml file created at: /Users/mattstauffer/.homestead/Homestead.yaml
If you check out the new ~/.homestead
directory, you’ll see your skeleton config file. Now just copy over your old aliases
file and Homestead.yaml
file; for example:
cp ~/OldHomesteadDirectory/Homestead.yaml ~/.homestead
cp ~/OldHomesteadDirectory/aliases ~/.homestead
Now add a chunk at the bottom of your Homestead.yaml
file that looks like this:
databases:
- homestead
Now, you should be able to run homestead up
from anywhere and see the first provision. Note, like I wrote above: This is creating a new Homestead box from scratch (using your Homestead.yaml
file), not importing your old one. So you'll have to re-migrate and re-seed your databases, etc.
Caveats: The first time I ran
homestead up
with the new version, I got a lot of errors. I upgraded my Homestead box (homestead update
), and the next time I provisioned I didn't get any errors. I don't know whether the fix was just running it twice, or upgrading the machine image, but either way it all runs fine now.
That's it. Migrating might be a bit of a pain (although this might be a motivation to create better migrations and seeds ;) ), but this is a much cleaner system with a much clearer upgrade path. Much rejoicing!
]]>This time around, I decided to clean it up a bit according to my most recent coding standards. I’ve been on a PSR-2 kick lately, so I figured why not finally try out fabpot’s PHP-CS-Fixer.
Note: I wrote this post Friday, and was just getting around to editing and posting it today--and then I saw that it reached 1.0 today. Great timing!
You can use Composer’s global require
to install php-cs-fixer:
$ composer global require fabpot/php-cs-fixer @stable
Make sure your Composer vendor bin is in your path (in .bash_profile
or .zsh_profile
or whatever):
export PATH="$PATH:$HOME/.composer/vendor/bin"
If you’re using a Mac, you can also just install it with Homebrew:
$ brew tap josegonzalez/homebrew-php
$ brew install php-cs-fixer
Check out the Github page for full installation instructions for these and other methods.
By default, it runs “all PSR-2 fixers and some additional ones.” You can toggle the level you want to run with the --level
flag, which I’ll be setting to psr2
so that the “additional” checks, which are targeted at Symfony and go above and beyond PSR-2, don’t throw me off. (It runs the entire stack by default, which is called level “symfony” and includes things like “Align equals signs in subsequent lines.”)
Let’s try it out! First we’ll do a non-changing dry run to see which files it’s going to change:
$ cd Sites/confomo
$ php-cs-fixer fix ./ --level=psr2 --dry-run
... and the result:
1) app/commands/GrabTwitterProfilePicsCommand.php
2) app/config/app.php
3) app/config/auth.php
(... and more, trimmed for blog post)
Fixed all files in 14.968 seconds, 5.250 MB memory used
Looks like about everything. Note that this doesn't show me what will change, but just a list of which files will change. OK, let’s go:
$ php-cs-fixer fix ./ --level=psr2
Ahh… the sweet, sweet sound of thousands of tabs converting to spaces. Well, most of them. There are quite a few file types and structures that PHP-CS-Fixer had a little trouble parsing, so I had to still go in and do quite a bit of manual cleanup afterwards--but it got the job started for me.
Like every good tool, PHP-CS-Fixer has a dotfile for configuration. It's going to be a file named .php_cs
that's actually a PHP file, an instance of the SymfonyCSConfigInterface
. Check out this example from the docs:
<?php
$finder = Symfony\CS\Finder\DefaultFinder::create()
->exclude('somedir')
->in(__DIR__)
;
return Symfony\CS\Config\Config::create()
->fixers(array('indentation', 'elseif'))
->finder($finder)
;
You can configure the levels, the "fixers", the files, and the directories you want to analyze through that file.
That’s it!
Choose your favorite method of installing, choose your level of sniffing, do a dry run to see what files it’ll fix, run it, rejoice, and read the docs for more configuration options.
Questions? Comments? Hit me up on Twitter at @stauffermatt.
I was curious about how much of PSR-2 really got implemented as a result of PHP-CS-Fixer's PSR-2 toggle. I could've read through the checks, but instead I made the ugliest, most non-PSR-1 and non-PSR-2 compliant script I could, and ran it through the fixer.
<?php
use Exception, Awesome;
namespace Awesome\Stuff;
class bad_class
extends \Exception{
const camelCase = 'abc';
public $awesome = 'unvisible'; public $_great = 'fabulous';
function Do_something_snaked_cased(){
// This is one really frigging long line. I wonder if PHP-CS-Fixer will trim this really frigging long line? It says no hard limit so I think it won't.
if( isset ($abc) || TRUE )
{
// do stuff
}
}
final static public function woop() {}
function _invisible( $stuff = [] , $other_stuff ) {
switch($stuff)
{
case 0:
echo 'stuff';
break;
}
}
}
echo "echo";
?>
I ran it and got the following:
<?php
use Exception;
use Awesome;
namespace Awesome\Stuff;
class bad_class
extends \Exception
{
const camelCase = 'abc';
public $awesome = 'unvisible';
public $_great = 'fabulous';
public function Do_something_snaked_cased()
{
// This is one really frigging long line. I wonder if PHP-CS-Fixer will trim this really frigging long line? It says no hard limit so I think it won't.
if (isset($abc) || TRUE) {
// do stuff
}
}
final public static function woop()
{
}
public function _invisible($stuff = [], $other_stuff)
{
switch ($stuff) {
case 0:
echo 'stuff';
break;
}
}
}
echo "echo";
As you can see, it cleaned up my indentation (original was tabs, although it seems the code embedder for my blog is converting them to spaces on display), my use block, my line spacing, my brace spacing, the spaces around parentheses, and the closing ?>
. It didn't help with case (camel v upper v studly v snake), leading underscores, switch indentation, parameter orders, capitalization of TRUE/FALSE/NULL, split-line class definition, and probably a few others I didn't notice.
It makes sense: It can't make fundamental changes to the properties of your app. It's not smart enough to know, for example, where const camelCase
is used, so it can't feel comfortable changing it. So PHP-CS-Fixer changes the things it can change without breaking your code, which is great, but leaves anything else alone--meaning it's a good first step, but you still have to be running your own sniffers at the same time.
For a real answer and not this bogus mythbusters stuff I'm doing, check out the Github page for a listing of all the fixers PHP-CS-Fixer makes available.
]]>Thankfully, Laravel 5.0 vastly simplifies environment detection. In 4, you could have multiple environment files based on the environment name (.env.php
, .env.local.php
, etc.). In all honesty, I never used the environment-specific aspect; I imagine you could theoretically use it to commit all of your environment files to your repo. But since we don't commit any of our environment files, it was a useless distinction--and it forced the delayed loading of the environment file, because it couldn't be loaded until after the environment was detected.
Well, NO MORE. Laravel 5.0 is using PHP dotenv, a proven 3rd-party library that loads from a single .env
file.
Every Laravel app now ships with a default .env.example
file, which at the moment looks like this:
APP_ENV=local
APP_KEY=SomeRandomString
DB_USERNAME=homestead
DB_PASSWORD=homestead
In order to use this file, just copy it and name the copy .env
. Why aren't we renaming the original? You'll see in a second.
Now, you can edit your APP_ENV
--which, as you can tell from the default, is the primary way for us to set the application environment name. Check out the newer, simpler environment detection in bootstrap/environment.php
:
$env = $app->detectEnvironment(function()
{
return getenv('APP_ENV') ?: 'production';
});
That's a beautiful thing!
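Elsewhere in your app you can then branch on the detected environment name; a quick sketch using App::environment(), which returns true when the current environment matches:

```php
// Anywhere in the app: branch on the environment name set via APP_ENV
if (App::environment('local')) {
    // e.g. enable extra debugging output
}
```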
So, why are we copying the .env.example
instead of just renaming it? Well, imagine your app for a second. Imagine it has a consistent need for 10 environment variables to be defined. Sure, you'll have reasonable fallbacks if they're not defined, but it's still a better deal if you have them all.
Where are you going to store the directions for which variables each app's .env
file should set? You could store it in the readme, sure... or you could just update the .env.example
file to be the directions for which variables each install of your app should have.
That's it! Need 10 variables for each install? Add those 10 variables to your .env.example
file with sensible (or silly) defaults. This file will get committed to your source control, and then each new install can start out by running cp .env.example .env and then customizing .env.
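Once your .env is in place, the usual way to consume those values is the env() helper in your config files. A sketch (the key names mirror the example file above; the fallbacks are illustrative):

```php
// config/database.php (excerpt, illustrative): read values loaded
// from .env, falling back to a default when the variable isn't set
'mysql' => [
    'username' => env('DB_USERNAME', 'homestead'),
    'password' => env('DB_PASSWORD', ''),
],
```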
You can learn more from the PHP dotenv docs, but here's a clever note: You can reference environment variables in later environment variables. Check out this example from their readme:
BASE_DIR=/var/webroot/project-root
CACHE_DIR=$BASE_DIR/cache
LOG_DIR=$BASE_DIR/logs
That's clever.
What if you want to ensure all the required variables are set up front, rather than waiting for the app to break when it accesses them?
Dotenv::required('DB_USERNAME');
// or
Dotenv::required(['DB_HOST', 'DB_NAME', 'DB_USERNAME', 'DB_PASSWORD']);
Done. If it's not defined, it'll throw a RuntimeException.
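If you'd rather fail fast with a friendlier message, you could wrap that check yourself; a sketch (the message wording is mine):

```php
try {
    Dotenv::required(['DB_HOST', 'DB_NAME', 'DB_USERNAME', 'DB_PASSWORD']);
} catch (RuntimeException $e) {
    // Halt bootstrapping with a clear message instead of failing mid-request
    exit('Environment misconfigured: ' . $e->getMessage());
}
```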
Simple, easy, powerful. And this will completely invalidate all of my blog posts, workarounds, and complaints about environment detection in Laravel. Now it's simple to define your environment name and your environment variables in a single, consistent, predictable manner.
]]>Note: Event Annotations were eventually removed from core, and separated to a package maintained by the Laravel Community. The package should function the same as the documentation here, other than that it requires binding a custom service provider. Feedback can go to the Github issues for the project or to @artisangoose in the Larachat slack.
In 5.0, Laravel is moving more and more of the top-level, bootstrapped, procedural bindings and definitions into a more Object-Oriented, separation-of-concerns-minded structure. Filters are now objects, controllers are now namespaced, the PSR-4-loaded application logic is now separate from the framework configuration, and more.
We saw in the last post that annotations are one of the ways Laravel 5.0 is making this change. Where routes used to be bound one after another in routes.php, they now can be bound with annotations on the controller class and method definitions.
Another part of Laravel that has traditionally been bound with a list of calls one after another is event listeners, and this is the next target of the annotation syntax.
Consider the following code:
Event::listen('user.signup', function($user)
{
$intercom = App::make('intercom');
$intercom->addUser($user);
});
Somewhere in your code—in a service provider, maybe, or maybe just in a global file somewhere—you've bound a listener (the closure above) to the "user.signup" event.
Of course, you're probably noticing that all that closure does is call a single method—so we could refactor it to this:
Event::listen('user.signup', 'Intercom@addUser');
Now, let's drop the need for the binding entirely, and replace it with an annotation.
<?php namespace App;
class Intercom
{
/**
* @Hears("user.signup")
*/
public function addUser(User $user)
{
return $this->api_wrapper->sendSomeAddThing(
$user->email,
$user->name
);
}
}
As you can see, the @Hears
annotation can take a string event name, but it can also take an array of event names (in annotations, arrays are surrounded by {} instead of []).
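For instance, to have one method listen for several events (the second event name here is purely illustrative):

```php
/**
 * @Hears({"user.signup", "user.reactivated"})
 */
public function addUser(User $user)
{
    // Runs when either event fires
}
```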
You also have to add the name of your classes to the $scan
property on the EventServiceProvider
. So, open up App/Providers/EventServiceProvider.php
, find the $scan
array, and update it:
<?php
...
protected $scan = [
'App\Intercom'
];
Now, run artisan event:scan
and you'll get a file named storage/framework/events.scanned.php
, with the following contents:
<?php
$events->listen(array (
0 => 'user.signup',
), 'App\Intercom@addUser');
Instantly bound.
There are positives and negatives to working with your event system this way.
The primary negative I see is that you could look at this annotation as being framework-specific; if that's the case, you're now placing framework-specific code directly into your domain. If you imagine this Intercom class being something you're passing around between several sites, its binding may be specific to this site--in which case you'd be better off using the classic style of binding. However, that's not always the case.
Note that this negative is different from the same situation in Route Annotations, which are only being applied to Controllers--which are not domain objects.
The positives I can see at first glance are that first, you're defining the method's act of listening on the method itself, rather than elsewhere; and second, that you're defining the listener in a way that it can be programmatically accessed (meaning you could, at any point, replace artisan event:scan
with a program of your own devising that outputs something other than a Laravel events.scanned
file). There are likely smarter folks than me that'll weigh in on this.
Adding custom middleware to your Laravel app has actually been around for a while. For a great introduction to middleware, and how middleware worked in Laravel 4.1, check out Chris Fidao's HTTP Middleware in Laravel 4.1.
NOTE: Filters still exist in the codebase, so you can still use them, but middleware is becoming the preferred practice and way of thinking about decorating your routes.
Middleware is actually a little hard to wrap your head around at first. Take a look at the graphic below, from StackPHP. If your application--your routing, your controllers, your business logic--is the green circle in the center, you can see that the user's request passes through several middleware layers, hits your app, and then passes out through more middleware layers. Any given middleware can operate before the application logic, after it, or both.
So, middleware is a series of wrappers around your application that decorate the requests and the responses in a way that isn't a part of your application logic.
(image attribution StackPHP.com)
The way this works is that middleware implements a decorator pattern: it takes the request, does something, and returns another request object to the next layer of the stack.
Laravel uses middleware by default to handle encrypting/decrypting and queueing cookies, and reading and writing sessions, but you can also use it to add any sort of layer you'd like to your request/response cycle: rate limiting, custom request parsing, and much more.
artisan make:middleware MyMiddleware
This will generate a simple middleware file:
<?php namespace App\Http\Middleware;
use Closure;
use Illuminate\Contracts\Routing\Middleware;
class MyMiddleware implements Middleware {
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @return mixed
*/
public function handle($request, Closure $next)
{
//
}
}
As you can see, the foundation of any middleware is the handle
method, which takes two parameters: $request
, which is an Illuminate Request object, and $next
, which is a Closure (anonymous function) that runs the request through the rest of the middleware stack.
Remember my absurd example of a ValidatesWhenResolved object that blocks odd request ports? Well, we're bringing it back, Middleware-style.
<?php namespace App\Http\Middleware;
use Closure;
use Illuminate\Contracts\Routing\Middleware;
class MyMiddleware implements Middleware {
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @return mixed
*/
public function handle($request, Closure $next)
{
// Test for an even vs. odd remote port
if ($request->server->get('REMOTE_PORT') % 2 > 0)
{
throw new \Exception("WE DON'T LIKE ODD REMOTE PORTS");
}
return $next($request);
}
}
There are two primary ways to bind middleware in Laravel 5. Both start with App\Http\Kernel
.
You'll notice that this new Kernel
class has two properties: $middleware
and $routeMiddleware
. Both are arrays of middleware; the middlewares in $middleware
run on every request and the middlewares in $routeMiddleware
have to be enabled.
At the time of this writing, six middlewares run by default:
protected $middleware = [
'Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode',
'Illuminate\Cookie\Middleware\EncryptCookies',
'Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse',
'Illuminate\Session\Middleware\StartSession',
'Illuminate\View\Middleware\ShareErrorsFromSession',
'Illuminate\Foundation\Http\Middleware\VerifyCsrfToken',
];
and three are available as optional:
protected $routeMiddleware = [
'auth' => 'App\Http\Middleware\Authenticate',
'auth.basic' => 'Illuminate\Auth\Middleware\AuthenticateWithBasicAuth',
'guest' => 'App\Http\Middleware\RedirectIfAuthenticated',
];
As you can see, the optional middlewares available by default map to the filters that were optional by default--except that, importantly, CSRF protection has now been enabled by default for all routes.
So, let's start by running our middleware on every request. Simply add it to $middleware:
protected $middleware = [
'App\Http\Middleware\MyMiddleware',
'Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode',
'Illuminate\Cookie\Middleware\EncryptCookies',
'Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse',
'Illuminate\Session\Middleware\StartSession',
'Illuminate\View\Middleware\ShareErrorsFromSession',
'Illuminate\Foundation\Http\Middleware\VerifyCsrfToken',
];
... and now it'll run on every request.
OK, now let's move our custom middleware to the optional stack, with a key:
protected $routeMiddleware = [
'auth' => 'App\Http\Middleware\Authenticate',
'auth.basic' => 'Illuminate\Auth\Middleware\AuthenticateWithBasicAuth',
'guest' => 'App\Http\Middleware\RedirectIfAuthenticated',
'absurd' => 'App\Http\Middleware\MyMiddleware',
];
And now we can apply it using the $this->middleware() method on the base Controller or in routes.php.
Note: Annotations are no longer a part of Laravel 5 core, so middleware route annotation is no longer supported without using an external package.
You can annotate a controller or a route to use specific middleware:
/**
* @Resource("foobar/photos")
* @Middleware("auth")
* @Middleware("absurd", except={"update"})
* @Middleware("csrf", only={"index"})
*/
class FoobarPhotosController
{}
You can annotate a single controller method:
/**
* @Middleware("auth.basic")
*/
public function index() {}
Or, you can use the $this->middleware()
method on any controller (or its methods) if the controller extends the base controller:
...
use Illuminate\Routing\Controller;
class AwesomeController extends Controller {
public function __construct()
{
$this->middleware('csrf');
$this->middleware('auth', ['only' => 'update']);
}
}
You can also assign middleware to run on a route in routes.php
:
// Routes.php
// Single route
$router->get("/awesome/sauce", "AwesomeController@sauce", ['middleware' => 'auth']);
// Route group
$router->group(['middleware' => 'auth'], function() {
// lots of routes that require auth middleware
});
It took me a minute to follow this, but Taylor pointed out that the difference between a "before" middleware and an "after" middleware is based on whether the middleware's action happens before or after the request it's passed:
...
class BeforeMiddleware implements Middleware {
public function handle($request, Closure $next)
{
// Do Stuff
return $next($request);
}
}
...
class AfterMiddleware implements Middleware {
public function handle($request, Closure $next)
{
$response = $next($request);
// Do stuff
return $response;
}
}
As you can see, the before middleware operates and then passes on the request. The after middleware, on the other hand, allows the request to be processed and then operates on the response.
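And since a single middleware can act on both sides of the stack, you can combine the two shapes in one class; a minimal sketch:

```php
...
class BothMiddleware implements Middleware {
    public function handle($request, Closure $next)
    {
        // "Before" work: inspect or modify the incoming request here
        $response = $next($request);
        // "After" work: inspect or modify the outgoing response here
        return $response;
    }
}
```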
If you're not familiar with it, it might take a minute to get your head wrapped around the concept of middleware. Filters are an easier jump from our normal way of thinking about routing requests to controllers. But middleware--the concept of a stack that passes along only a request, allowing it to be decorated piece by piece--is cleaner, simpler, and more flexible.
Not only that, but middleware is just one more way of working with your request that is both powerfully effective in your Laravel apps and plays nicely elsewhere. The Laravel 5.0 middleware syntax isn't perfectly compatible with StackPHP syntax, but structuring your request/response stack around middlewares is a further step in the direction of separation of concerns--and modifying a Laravel middleware to work in a separate, StackPHP-style syntax would take minimal effort.
Questions? Comments? I'm @stauffermatt on Twitter.
]]>Note: Route Annotations were eventually removed from core, and separated to a package maintained by the Laravel Community. The package should function the same as the documentation here, other than that it requires binding a custom service provider. Feedback can go to the Github issues for the project or to @artisangoose in the Larachat slack.
If you're not familiar with how (or why) annotations exist, I'd suggest checking out Rafael Dohms' talk PHP Annotations: They Exist!. In short, annotations are notes about your code that live in the DocBlock. But PHP has the ability to read and parse these notes, and so you can use them to give your code directions. Opinions on them are varied, but they've come to Laravel to stay.
One of the difficulties on Laravel sites--especially larger sites--is mentally mapping your routes to your controller methods.
Let's assume we're not using route Closures (because it's not the best practice and because we won't be able to take advantage of Laravel 5.0's route caching) and we're not using Implicit or Resource Controller routes, so all of our routes are going to be mapped explicitly to a controller method, somewhere.
So, we have something like this (note that Laravel 5.0 prefers $router->get
instead of Route::get
):
// routes.php
$router->get('awesome-sauce/{id}', [
'as' => 'sauce',
'uses' => 'AwesomeController@sauce'
]);
<?php namespace App\Http\Controllers;
class AwesomeController {
public function sauce($id) {}
}
...but imagine having dozens or hundreds of those links. What if we were able to make a more direct linkage? Say, if we were able to determine the route in the controller? Bum bum bum...
Note: Laravel 5.0 uses POPOs (plain old PHP objects) for controllers instead of children of the \Controller class. More on this later.
OK, it's clear what I'm leading up to here. Check it out:
<?php namespace App\Http\Controllers;
class AwesomeController {
/**
* @Get("/awesome-sauce/{id}", as="sauce")
*/
public function sauce($id) {}
}
... that's it.
One more step. Open up App/Providers/RouteServiceProvider.php
, and add App\Http\Controllers\AwesomeController
to the $scan
array:
...
protected $scan = [
'App\Http\Controllers\HomeController',
'App\Http\Controllers\Auth\AuthController',
'App\Http\Controllers\Auth\PasswordController',
'App\Http\Controllers\AwesomeController'
];
Run artisan route:scan
and it'll automatically generate your route file at storage/framework/routes.scanned.php
. It'll have a lot of default routes, but here is your new route down at the bottom:
<?php
...
$router->get('awesome-sauce/{id}', [
'uses' => 'App\Http\Controllers\AwesomeController@sauce',
'as' => 'sauce',
'middleware' => [],
'where' => [],
'domain' => NULL,
]);
You're now determining your routes inline, using annotations, without touching routes.php. DONE.
Note that there are two places you can determine your route annotations: on the controller and on the method (or both). Check out the following controller (from the framework tests, but modified for demonstration):
<?php namespace App\Http\Controllers;
/**
* @Resource("foobar/photos", only={"index", "update"}, names={"index": "index.name"})
* @Controller(domain="{id}.account.com")
* @Middleware("FooMiddleware")
* @Middleware("BarMiddleware", except={"update"})
* @Middleware("BoomMiddleware", only={"index"})
* @Where({"id": "regex"})
*/
class BasicController {
/**
* @Middleware("BazMiddleware")
* @return Response
*/
public function index() {}
/**
* @return Response
*/
public function update($id) {}
/**
* @Put("/more/{id}", after="log")
* @Middleware("QuxMiddleware")
*/
public function doMore($id) {}
}
Notice that some annotations are set on the controller and others on the methods. Also note the new emphasis on Middleware (and the absence of Before and After); I'll be writing a post soon about the new ways we'll be using Middleware.
Here are a few more options and use cases:
Use the verbs you're used to using in your routes file to annotate simple routes.
<?php namespace App\Http\Controllers;
class BasicController {
/**
* @Get("awesome")
*/
public function awesome() {}
/**
* @Post("sauce/{id}")
*/
public function sauce($id) {}
/**
* @Put("foo/{id}", as="foo")
*/
public function foo($id) {}
}
Note that you can define a resource route with @Resource("route-name")
; you can choose which routes are shown with only={"method1", "method2"}
; and you can name routes with names={"method": "name-for-method"}
.
<?php namespace App\Http\Controllers;
/**
* @Resource("foobar/photos", only={"index", "update"}, names={"index": "index.name"})
*/
class FoobarPhotosController
{
public function index()
{
// Index, named as index.name
}
public function update()
{
// Update, un-named
}
}
Just like in a normal route definition, annotations can control Sub-Domain Routing:
<?php namespace App\Http\Controllers;
/**
* @Controller(domain="{user-name}.my-multi-tenant-site.com")
*/
class MyStuffController
{
// Do stuff
}
Laravel 5.0 replaces Before and After Filters with Middleware; check back soon for a post introducing how the new implementation of Middleware works.
<?php namespace App\Http\Controllers;
/**
* @Middleware("FooMiddleware")
*/
class MiddlewaredController
{
/**
* @Middleware("BarMiddleware")
*/
public function barred() {}
}
You can apply route constraints, as well:
<?php namespace App\Http\Controllers;
class RegexedController {
/**
* @Where({"id": "regex"})
*/
public function show($id) {}
}
If your Environment is detected as local
, Laravel will auto-scan your controllers on every page view. That way you don't have to artisan route:scan
every time you make a change.
Since I originally wrote this article, the default routes.php
has been removed from the default project. In order to bring it back, edit App\Providers\RouteServiceProvider
, and in the map()
method, un-comment the line that says require app_path('Http/routes.php')
. Now you can just create App/Http/routes.php
and use it like you used to.
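Once un-commented, the map() method would look roughly like this (a sketch; the exact signature follows whatever the default provider ships with):

```php
// App/Providers/RouteServiceProvider.php (sketch)
public function map(Router $router)
{
    // Load the classic routes file alongside any annotated routes
    require app_path('Http/routes.php');
}
```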
You can still use routes.php
if it makes you more comfortable--or if you don't see the value behind this.
Once again, this new Laravel 5.0 feature both opens up new possibilities and, in my mind, helps us write cleaner, better-architected code. Since routes.php is simply a map between URL routes and controllers, route annotations move the mapping into the controller and remove the need for a separate routes file entirely.
]]>// main.scss
.context {
@import 'embed';
font-size: 42px;
}
// _embed.scss
.child {
color: red;
}
Produces:
.context {
font-size: 42px;
}
.context .child {
color: red;
}
This is a really clever trick; I've only ever used @import
to pull in things to the top level of the document. Unfortunately, I can't use it, for two reasons:
First, we don't like to use descendant selectors. Like I've written before, we use BEM instead of descendant selectors in our CSS.
Second, we've actually tried, and failed, to use style namespaces. For the most significant example, we tried namespacing the primary content section of our pages under .content
. It seemed really clever, because we could then isolate the styles we were applying there to just apply in that context.
The problem was, every selector within that namespace instantly had an increased specificity. It was no longer h1
--it was now .content h1
, which means you could no longer style that h1 later by adding a single class like .news-title
, because .news-title
isn't as specific as .content h1
. So, you'd have to write .news-title, .content .news-title
just to make it work. It became a huge mess.
Along comes BEM, to save the day. So, I thought, why can't we use Trey's trick for BEM? Turns out we can.
// main.scss
.context {
@import 'embed';
font-size: 42px;
}
// _embed.scss
&__child {
color: red;
}
Produces:
.context {
font-size: 42px;
}
.context__child {
color: red;
}
Granted, I'm not sure if this is even useful--why would you want to separate out the child elements and modifiers of a BEM module from their parent block? But maybe there are contexts where you'd want to. So, now you know: You can do it.
]]>One of the first things PHP developers learn as they start growing in modern coding practices is to use dependency injection in order to follow the D in SOLID: Dependency Inversion.
Laravel's Container is called an IOC ("Inversion of Control") Container, and that's the case because it allows your control to happen at the top level of the app: you ask in your low-level code (controllers, implementation classes, etc.) for an instance of "mailer", and the container gives you one. Your low-level code doesn't care about which service is actually sending your mail--Mandrill? Mailgun? Sendmail? It doesn't matter, as long as the interface to the mailer class is the same.
Here's a quick sample of traditional dependency injection.
...
class Listener
{
    protected $mailer;

    public function __construct(Mailer $mailer)
    {
        $this->mailer = $mailer;
    }

    public function userWasAdded(User $user)
    {
        // Do some stuff...
        $this->mailer->send('emails.welcome', ['user' => $user], function ($message) use ($user)
        {
            $message->to($user->email, $user->name)->subject('Welcome!');
        });
    }
}
As you can see, we inject the Mailer class into the object using the constructor. And Laravel's Container makes it simple to instantiate this class, because it automates injection into the constructor. Notice that I can create a new Listener without passing in a Mailer; that's because Laravel resolves it for me, and injects it in.
$listener = App::make('Listener');
This is great because A) I can now make that decision about which Mailer I want once in the app, rather than every time, and B) this makes testing this class much easier.
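The flip side of that decision happens once, at the top of the app--typically in a service provider. A minimal sketch (the MandrillMailer class and config key here are hypothetical):

```php
// In a service provider's register() method: bind the Mailer the
// Listener typehints to one concrete implementation, app-wide.
App::bind('Mailer', function ($app) {
    // Swap this one line to change mail providers everywhere
    return new MandrillMailer($app['config']['services.mandrill.key']);
});
```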
But what if you only need to use the injected class in a single method? Your constructor can get quite cluttered with single use injections.
Or what if you need to perform a particular action upon injection, but only want it to operate on that particular method? (FormRequests and ValidatesUponResolved)
Enter method injection: It's just like constructor injection, but it allows you to inject dependencies right into your methods--when those methods are called by the Container.
My guess is that the most common use case for method injection will be controllers. Like I mentioned above, the new FormRequests are a perfect example. But that's already been documented, so let's look at something else.
...
class DashboardController extends Controller
{
    public function showMoneyDashboard(MoneyRepository $money)
    {
        $usefulMoneyStuff = $money->getUsefulStuff();

        return View::make('dashboards.money')
            ->with('stuff', $usefulMoneyStuff);
    }

    public function showTasksDashboard(TasksRepository $tasks)
    {
        $usefulTasksStuff = $tasks->getUsefulStuff();

        return View::make('dashboards.tasks')
            ->with('stuff', $usefulTasksStuff);
    }

    public function showSupervisionDashboard(SupervisionRepository $supervision)
    {
        $usefulSupervisionStuff = $supervision->getUsefulStuff();

        return View::make('dashboards.supervision')
            ->with('stuff', $usefulSupervisionStuff);
    }
}
Since public controller methods are called by the Container (when you map a route to them and the user visits that route), these dependencies will be auto-injected as soon as you hit that route. Nice and clean.
So, we now know that controller methods are resolved by the Container. ServiceProvider's boot
methods are, too.
But you can arbitrarily choose to have the Container resolve any method you'd like.
...
class ThingDoer
{
    public function doThing($thing_key, ThingRepository $repository)
    {
        $thing = $repository->getThing($thing_key);

        $thing->do();
    }
}
... and we can call it from our Controller using App::call(), which can optionally take an array of parameters as its second argument:
<?php namespace App\Http\Controllers;

use Illuminate\Contracts\Container\Container;
use Illuminate\Routing\Controller;

class ThingController extends Controller
{
    public function doThing(Container $container)
    {
        $thingDoer = $container->make('ThingDoer');

        // Calls the $thingDoer object's doThing method with one parameter
        // ($thing_key) with a value of 'awesome-parameter-here'
        $container->call(
            [$thingDoer, 'doThing'],
            ['thing_key' => 'awesome-parameter-here']
        );
    }
}
Method injection is, at its core, an enabler of some helpful system features like FormRequest--but don't let that stop you from using it. It's just one more way to clean up your code. And we all need cleaner code.
Laravel 5.0's Flysystem integration abstracts your filesystem operations behind a consistent API. That means that you can write your app just like you did using local file storage:
/**
 * Save a thing
 *
 * @param Thing  $thing
 * @param string $filename
 */
public function saveThing(Thing $thing, $filename)
{
    File::put('uploads/' . $filename, $thing);
}
But now you can, at any point, change your production app settings to use an external host (we'll use s3 in our example) instead, without changing a line of your business logic.
First, you have to add the cloud provider's dependency to composer.json; for s3, it's the AWS SDK (aws/aws-sdk-php).
$ composer require aws/aws-sdk-php
Then, edit config/filesystems.php
(or config/production/filesystems.php
, so you're only configuring for the production site), change the default driver from local
to s3
, and then add your s3 credentials to the s3
section of the disks array:
return [
    'default' => 's3',

    'disks' => [
        's3' => [
            'driver' => 's3',
            'key'    => 'fslkfqweoirqew',
            'secret' => '24j12oin12oi5nio251',
            'bucket' => 'my-awesome-website-bucket'
        ]
    ]
];
Uniquely, the Filesystem config has two defaults: The Filesystem default (which is injected when you typehint Illuminate\Contracts\Filesystem\Filesystem
, and also bound to the Container as filesystem.disk
) and the Cloud default (which is injected when you typehint Illuminate\Contracts\Filesystem\Cloud
, and also bound to the Container as filesystem.cloud
). This way you can have any given environment have a default local and a default cloud filesystem config.
If you're using the façade, you're going to get the default default by default, rather than the cloud default. (Say that five times fast...)
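In practice, that means you can typehint whichever contract you need and get the matching default disk. A sketch (the controller name and file paths are hypothetical):

```php
<?php namespace App\Http\Controllers;

use Illuminate\Contracts\Filesystem\Cloud;
use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Routing\Controller;

class ReportsController extends Controller
{
    // $files resolves to the default disk; $cloud to the default cloud disk
    public function store(Filesystem $files, Cloud $cloud)
    {
        $files->put('tmp/report.txt', 'local scratch copy');
        $cloud->put('backups/report.txt', 'durable cloud copy');
    }
}
```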
I'd recommend typehinting the contract (Illuminate\Contracts\Filesystem\Filesystem) instead of using the façade if you're accessing files anywhere other than your controllers.
Once you've installed the AWS SDK and edited filesystems.php, all of your file operations are now happening against your s3 account. That's it! No extra work, no extra steps: you're up and running, storing and accessing files in the cloud like a pro. Way to go.
Recently I ran a full package upgrade on one of my servers (sudo apt-get update && sudo apt-get upgrade) and I ended up with a white screen on all of my sites. I quickly found that it was PHP not serving correctly (MySQL and Nginx were both working fine), and after sending a help request to Taylor I learned that it's due to a breaking change in Nginx.
Thankfully, there's a quick fix to that, and even better, Taylor has now introduced Global Forge Recipes (tweet) to make fixes like this much simpler.
Forge has always had Recipes, which allow you to save shell scripts that you'd like to apply frequently or across all of your machines. But now, below the "Your Recipes" section, there's a "Forge Recipes" section, which are official recipes provided by Taylor.
The first such recipe is "Update Nginx FastCGI Parameters", which addresses the issue I mentioned above.
To run a recipe, just click the green arrow next to it, choose which server you want to run it on, and click "Run." Once the recipe has completed, Forge will email you a Recipe Report. You can also click the i (info) button to read a description of the recipe and its actual contents.
This particular recipe either fixes your server (if you had the same error I did) or prepares your server so it will never break in the future. As such, I'd recommend running it on all of your Forge-managed servers.
Forge's Official Recipes provide Forge users with formal, official recipes that are written and vetted by Taylor, and an easy mechanism to deploy them on single servers or all servers. I'd recommend running "Update Nginx FastCGI Parameters" on all of your servers today.
A dangerous vulnerability in bash, a shell that's enabled by default on pretty much every *nix system ever. Learn more here. In short, it's bad but it's wildly easy to fix.
UPDATE: Ubuntu released a patch to fix this vulnerability after I wrote this post, and since Forge auto-applies security fixes nightly, all Forge-managed servers are now safe. You can read on for fun, but you're now safe.
It's likely going to be automatically fixed in an Ubuntu security update soon, but if you want to manually update your Forge-managed servers (or any other Ubuntu servers)--I would recommend this--just SSH into your server and run the following:
$ sudo apt-get update && sudo apt-get install --only-upgrade bash
This will get an updated list of available packages (apt-get update
) and then just upgrade bash. It wouldn't hurt to reboot your server afterwards, although it's not necessary--you can do this through Forge or by running sudo reboot
on your server.
Per this tweet, even this bash patch might not be ENOUGH--but it's better to apply and keep your eyes on the bug than to not apply.
You can also run the following to check whether your server is even vulnerable:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
If you see the following output, your server is vulnerable:
vulnerable
this is a test
If you see any other output, likely the following, your server is safe:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
Whether or not you were aware of it, the routing logic in Laravel 4 and earlier--especially as you have more and more routes in your application--was one such place for performance bottlenecks. A site with just a few hundred routes could, in the past, lose up to a half second just for the framework to register those routes. Fear no more, as Laravel 5 introduces route caching, optimizing the performance of your routes (except Closure routes, so it's time to move them all to controllers).
There's not a lot to using this feature, honestly. There's a new Artisan command, route:cache
, which serializes the results of your routes.php
file--it's performing the operation of parsing the routes once and then storing those results. Sort of like pre-compiling a Handlebars template, if you've ever done that before.
That's it! Now your routes are being parsed from the cached file, not your routes file. You can make all the changes you want to routes.php
and the routing of your app won't change until you re-cache.
The pros are pretty clear: your site gets faster.
The cons, however, need to be noted: Once you cache the site's routes once, you'll now have to re-cache your routes every time you make any changes to routes.php
, or the changes won't show up. This could cause confusion for new developers, or even for you if you just happen to forget you were using caching.
Thankfully, there are two things that can help you here. First, you can run artisan route:clear
, and artisan will delete your route cache. Second, you can consider only caching on your production server. Maybe only run artisan route:cache
as a post-deploy hook in Git, or just run it as a part of your Forge deploy process.
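For example, the tail end of a Forge deploy script might look something like this (the path and branch name are just placeholders):

```
cd /home/forge/example.com
git pull origin master
composer install --no-interaction --prefer-dist
php artisan route:cache
```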
The performance benefit of route caching might not be worth the potential confusion for you. If so, you can pretend this doesn't exist. But for groups with more managed deploy processes, the idea of something this simple trimming off a half second or more of load time on every page is huge.
Forge now has a feature called Circles, which allows you to choose a group of people (Forge paid users or not; non-users will need to create a free Circles-only Forge account) who have access to one or more of your servers. At the same time this feature was added, a new plan called Forge Plus was added, which is the basic Forge (which is now limited to 5 servers) plus Circles and unlimited servers.
A circle is a group of one or more users that you've grouped for the purpose of granting them access to some or all of your servers.
Each user in a circle will be able to do anything to any server (add sites, admin sites, delete sites, etc.) other than delete or archive the server. They won't see your billing, or your authentication/API information to the host, but they'll be able to administer the servers and sites fully.
Like I mentioned above, you now need to upgrade to a Forge Plus account ($20/mo instead of $10/mo) if you want to use Forge to manage more than 5 servers. I think this is a great move, because it leaves the majority of independent developers on the lowest plan, but allows business owners like myself to start paying for the greater level of use we get from Forge.
Another new update that came through with this change is a universal Sites dropdown, which allows you to navigate to a certain site without navigating to its server first. As you can see, it shows the site name, with the server name in parentheses:
Despite the limitation of only 5 servers for the entry level plan, you can still administer unlimited sites across those servers.
If you have more than 5 servers already, you will be able to continue to use your existing servers, but you will have to upgrade in order to add any more. [source]
If your invitees can't visit the "My Circles" page in their account to accept an invitation, but instead are redirected to the Connect page, have them authenticate their Github account, skip the step about adding a Server Provider, and then their account will be allowed access to the My Circles page.
You can administer your circles by choosing "My Circles" from the account dropdown:
This is a great move for Forge, and one I'll benefit from greatly. Now, rather than having to share my individual login information with all of my developers--and our contractors--I can now choose specific permissions for each developer based on the project, and I can let them do it using their own logins instead of sharing my own password around. Good stuff.
I love it. The directory structure has been modified to now better reflect how a majority of Laravel developers either work or recommend working, and this will reduce some of the pain of comprehending "best practices", and it makes the entire task of understanding Laravel simpler.
app
    Commands
    Console
    Events
    Handlers
        Commands
        Events
    Http
        Controllers
        Middleware
        Requests
    Providers
    Services
bootstrap
config
database
    migrations
    seeds
public
    package
resources
    lang
    views
storage
    cache
    logs
    meta
    sessions
    views
    work
tests
Basically, the app
directory has been trimmed down--and also boosted a little. Where it used to be a more classic Rails/CodeIgniter-style directory that holds all of your application's logic and framework config and templates and persistence and everything else, it's now primarily trying to hold your application's logic (including the business "domain")--and it's loading it all as PSR-4 classes.
As a result, Laravel-specific configuration details are now in their own directory. Resources--language and views--are now in their own directory. Database-related information is now in its own directory.
Note: There is a Legacy Service Provider (see the docs here) that'll allow you to serve a 5.0+ app from a 4.2- directory structure, so upgrading older Laravel apps to 5.0 won't require changing to the new directory structure.
So, why is this actually an improvement?
For starters, we're separating our concerns. The app directory was previously a bit of a catchall for pretty much all code other than frontend code. Now, it contains the core logic of your app--fittingly--and some of the particular implementation details live elsewhere.
Additionally, it's been considered best practice for quite a while to have an "App" style top level namespace for your domain logic. Getting started on a new project, for many of us, was at the very least deleting the models directory, adding a namespace folder named after our app, and PSR-4 autoloading that namespace. Now that's a native part of the folder structure, and it just got a lot easier to namespace Controllers and other aspects of your more framework-related code.
Finally, a lot of the code that used to be in procedural files (filters, for example) is now moved to classes and Service Providers. This makes execution easier to predict, reduces the amount of procedural code, and encourages more userland (i.e. "in our code, not in the framework") usage of Service Providers.
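As a rough sketch of what that userland usage looks like, code that once lived in procedural bootstrap files can now hang off a provider's methods (the provider name here is just illustrative):

```php
<?php namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function register()
    {
        // Container bindings that used to be scattered in bootstrap files
    }

    public function boot()
    {
        // Startup logic that used to live in start.php or filters.php
    }
}
```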
It's a little too extreme to say that the code in your app directory should be framework independent; controllers, filters, commands, and service providers will extend Laravel classes, and all of your classes may inherit from or receive injections of Laravel classes. But, this change goes a long way to moving the primary logic of your applications into PSR-4 loaded classes that could theoretically exist independent of Laravel.
If it's a class, or could be a class, it should go somewhere in app/
. If it's an Eloquent model, it should go somewhere in app/
. If it has to do with the traffic of your request through a web server (e.g. Controllers, FormRequests), it should go in app/Http
. If it has to do with CLI (command line interface) requests, it should go in app/Console
. If you would've put it in routes.php (but it isn't a route), or in start.php in the past, it should go into a Service Provider. And if it's a filter, it should now be its own class in app/Http/Filters
.
Other than that, it should be pretty clear.
By default, every Laravel app has a "namespace" that represents the top-level namespace for the app's classes. Out of the box it defaults to "App", and it maps directly to the app/
folder directly via PSR-4.
But you can easily rename the namespace with an artisan command that will also replace all instances of "App/" (in namespace declarations in Laravel classes) with your new namespace.
So if I was starting Confomo again, I'd create the new Laravel install, and then instantly run the artisan command to rename the namespace:
$ php artisan app:name Confomo
Now all of the default-included classes in the /app
directory are namespaced to Confomo; the PSR-4 autoloading statement in composer.json
is updated; and Laravel knows to look for its filters, controllers, etc. in that namespace.
The new app structure and the app namespacing in Laravel 5.0 are helping us, step by step, increase the overall quality, consistency, and flexibility of our code. I like it.
Did I miss anything? I'm @stauffermatt on Twitter.
Laravel 5.0 is coming out in November, and there are a lot of features that have folks excited. The New Directory structure is, in my mind, a lot more in line with how most developers work; Flysystem integration will make working with files endlessly more flexible and powerful; Contracts is a great step towards making Laravel-friendly packages that aren’t Laravel-dependent; and Socialite looks about 100x easier than Opauth. Also, Method Injection opens up a lot of really exciting opportunities.
One of the most valuable aspects of Laravel for me is that it allows for rapid app development. Laravel, and other frameworks like it, automate out the repetitive work that you have to do on every project. And a lot of newer features have been focusing on this. Cashier, and now Socialite and Form Requests.
If you have ever tried to figure out the best practices for validation in Laravel, you’ll know that it’s a topic of much discussion and little agreement. Validate in the controller? In a service layer? In the model? In a custom validation wrapper? In Javascript (NO JUST KIDDING THAT’S NEVER OK)?
Laravel’s new Form Request feature provides both standardization (“best practice” ish) and also convenience (this is more powerful and convenient than all prior options) to the process of validating and authenticating in Laravel.
NOTE: In this post I'm using the new view() helper instead of View::make().
Laravel 5.0 introduces Form Requests, which are a special type of class devoted to validating and authorizing form submissions. Each class contains at least a rules()
method which returns an array of rules and an authorize()
method which returns a boolean of whether or not the user is authorized to perform their request.
Laravel then automatically resolves the Form Request--validating the user's input--before your POST route's method runs, meaning our validation can now be moved entirely into FormRequest objects and out of our controllers and models.
If you don't have one yet, create a 5.0 project using the following command:
$ composer create-project laravel/laravel my-awesome-laravel-4-3-project-omg dev-develop --prefer-dist
Let’s imagine we’re going to be allowing a user to add a friend to our contact manager.
app/Http/routes.php
<?php
Route::get('/', 'FriendsController@getAddFriend');
Route::post('/', 'FriendsController@postAddFriend');
app/Http/Controllers/FriendsController.php:
<?php namespace App\Http\Controllers;

use App\Http\Requests\FriendFormRequest;
use Illuminate\Routing\Controller;
use Response;
use View;

class FriendsController extends Controller
{
    public function getAddFriend()
    {
        return view('friends.add');
    }

    public function postAddFriend(FriendFormRequest $request)
    {
        return Response::make('Friend added!');
    }
}
resources/views/friends/add.blade.php
<html><body>
    @foreach ($errors->all() as $error)
        <p class="error">{{ $error }}</p>
    @endforeach

    <form method="post">
        <label>First name</label><input name="first_name"><br>
        <label>Email address</label><input name="email_address"><br>
        <input type="submit">
    </form>
</body></html>
app/Http/Requests/FriendFormRequest.php
<?php namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;
use Response;

class FriendFormRequest extends FormRequest
{
    public function rules()
    {
        return [
            'first_name' => 'required',
            'email_address' => 'required|email'
        ];
    }

    public function authorize()
    {
        // Only allow logged in users
        // return \Auth::check();

        // Allows all users in
        return true;
    }

    // OPTIONAL OVERRIDE
    public function forbiddenResponse()
    {
        // Optionally, send a custom response on authorize failure
        // (default is to just redirect to initial page with errors)
        //
        // Can return a response, a view, a redirect, or whatever else
        return Response::make('Permission denied foo!', 403);
    }

    // OPTIONAL OVERRIDE
    public function response()
    {
        // If you want to customize what happens on a failed validation,
        // override this method.
        // See what it does natively here:
        // https://github.com/laravel/framework/blob/master/src/Illuminate/Foundation/Http/FormRequest.php
    }
}
Now, spin up a server with php artisan serve
or your favorite method. Submit the form and you can see our validation rules working without adding a line of validation logic to our controllers.
What about if we have different rules based on add vs. edit? What if we have conditional authorization based on the input? Here are a few examples, although we haven't yet established "best practices" on all of these.
There's nothing stopping you from having two (or more) separate form request classes for add and edit. You could create FriendFormRequest with all the rules, and then extend it to make AddFriendFormRequest or EditFriendFormRequest or whatever else, and each child class can modify the default behavior.
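A sketch of what those child classes might look like (the names and the relaxed rule are hypothetical):

```php
<?php namespace App\Http\Requests;

// Inherits rules() and authorize() from FriendFormRequest unchanged
class AddFriendFormRequest extends FriendFormRequest
{
}

// Overrides just the pieces that differ for editing
class EditFriendFormRequest extends FriendFormRequest
{
    public function rules()
    {
        $rules = parent::rules();

        // Example tweak: the name already exists, so it's optional on edit
        $rules['first_name'] = 'sometimes|required';

        return $rules;
    }
}
```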
The benefit of rules()
being a function instead of just a property is that you can perform logic in rules().
<?php

...

class UserFormRequest extends FormRequest
{
    ...

    protected $rules = [
        'email_address' => 'required',
        'password' => 'required|min:8',
    ];

    public function rules()
    {
        $rules = $this->rules;

        if ($someTestVariableShowingThisIsLoginInsteadOfSignup)
        {
            $rules['password'] = 'min:8';
        }

        return $rules;
    }
}
You can also perform logic in authorize. For example:
<?php

...

class FriendFormRequest extends FormRequest
{
    ...

    public function authorize()
    {
        if ( ! Auth::check())
        {
            return false;
        }

        $thingBeingEdited = Thing::find(Input::get('thingId'));

        if ( ! $thingBeingEdited || $thingBeingEdited->owner != Auth::id())
        {
            return false;
        }

        return true;
    }
}
Or, if you want a greater level of control for all of this, you can actually overwrite the method that provides the Validator instance. I will be expanding this section of this blog post shortly.
<?php

...

class FriendFormRequest extends FormRequest
{
    public function validator(ValidationService $service)
    {
        $validator = $service->getValidator($this->input());

        // Optionally customize this instance using the new ->after()
        $validator->after(function () use ($validator) {
            // Do more validation
            $validator->errors()->add('field', 'new error');
        });

        return $validator;
    }
}
I'll be writing more on this in a new blog post soon, but the concept of validating methods/routes/etc. when the IOC resolves something is now a separated to an interface: https://github.com/illuminate/contracts/blob/master/Validation/ValidatesWhenResolved.php
$redirect: the URI to redirect to if validation fails
$redirectRoute: the route to redirect to if validation fails
$redirectAction: the controller action to redirect to if validation fails
$dontFlash: the input keys that should not be flashed on redirect (default: ['password', 'password_confirmation'])
As you can see, Form Requests are powerful and convenient ways to simplify validation and authentication for form requests. Have trouble following this? Check out the Laracast for Form Request.
Since 5.0 is still under development, these things could change, or I may have missed something. Suggestions or corrections? Hit me up on Twitter.
To be entirely honest with you, I haven't come up with a really clever use case for this code outside of FormRequests. But I hope documenting it here will allow people smarter than me to see if it brings any particularly useful possibilities.
So, if you read my last blog post, you know that FormRequest objects, when injected (via the IOC with dependency injection), can cancel execution of the method they're running on. If my form doesn't validate, the POST route for that form gets cancelled by my FormRequest class.
So, it turns out that the aspect of the FormRequest that triggers the IOC container calling its validation on resolution is now available as a separate interface called ValidatesWhenResolved. Because of this, you can now build your own class that similarly intercepts the request prior to your controller (or non-controller, theoretically) method loading and can choose to pass or fail the validation.
NOTE: The route/method isn't actually cancelled on a failed validation. The FormRequest object simply throws an HTTP Exception, which then either gives an error JSON response or a redirect. Theoretically, you could do the exact same thing without the interface simply by throwing an exception in the constructor after validating. But the interface cleans it up a lot by performing the validation in a named method.
At the time of this post, this is what the interface looks like:
<?php namespace Illuminate\Contracts\Validation;

use Illuminate\Contracts\Container\Container;

interface ValidatesWhenResolved {

    /**
     * Validate the given class instance.
     *
     * @return void
     */
    public function validate();

}
As you can see, we're only obligated to provide a validate()
method. And really, the benefit that this class provides--other than the additional knowledge we gain about a class purely by observing that it's fulfilling a particular contract--is that the validate()
method is called when it's resolved from the IOC container. So let's try creating our own non-FormRequest class that implements this interface.
<?php namespace App\Http\Controllers;

use App\Random\RandomAccess;
use Illuminate\Routing\Controller;
use Response;

class ValidatedController extends Controller
{
    public function random(RandomAccess $ram)
    {
        return Response::make('You made it!');
    }
}
OK, so now we have a route. Let's try a non-FormRequest class:
<?php namespace App\Random;

use Exception;
use Illuminate\Contracts\Validation\ValidatesWhenResolved;
use Illuminate\Http\Request;

class RandomAccess implements ValidatesWhenResolved
{
    protected $request;

    public function __construct(Request $request)
    {
        $this->request = $request;
    }

    public function validate()
    {
        // Test for an even vs. odd remote port
        if ($this->request->server->get('REMOTE_PORT') % 2 > 0)
        {
            throw new Exception("WE DON'T LIKE ODD REMOTE PORTS");
        }
    }
}
Now that controller method is being intercepted randomly with an exception (depending on whether your request port is even or odd, which is perhaps the most useless example of all time).
As you can see, there's no magic happening here. Whether validate()
returns true or false doesn't matter. You could use the ValidatesWhenResolvedTrait to share some of the failedValidation()
workflow you have with FormRequest, but with the class I wrote above you're simply throwing an exception.
We can also use this elsewhere, and we can use a FormRequest-style validator using the ValidatesWhenResolvedTrait. I have yet to find a use case for this, though, so I'll leave this section short and simple. You could do it... but I don't yet know why you would. :)
I get it. You're not going to turn on a random exception toggler like my example. And in some ways, this may end up looking just like route filters. But I still suspect there's something really creative we could do here. Is there anything you're planning to inject into your controller anyway? Maybe make it implement this contract so it can auto-validate upon injection, rather than calling a validation method later.
As you can tell, I'm just fishing around here to see if I can find any clever or creative uses for this. Got any great ideas? Pass them along: @stauffermatt.
First time in Amsterdam.
First time in The Netherlands, actually.
First time in Europe as an adult.
First time speaking at a conference not in the U.S.
First time at Laracon Eu.
First time eating (and burning my mouth on) Bitterballen.
First time speaking (a single word of) Dutch.
First time meeting dozens of incredible folks in person who I previously only knew online, and plenty of others who I didn't know at all.
Before it all fades away, I wanted to share a few reflections.
I'm actually pretty satisfied with how my talk went. You can view more information about it here:
joind.in reviews | slide deck | video
I spoke on bringing the best assets of Laravel to other projects--both the good things that Laravel has that aren't unique to Laravel, like coding standards and design patterns, and also the things Laravel uniquely brings, including Illuminate components.
I spoke too quickly, partially because I was nervous and partially because my run-through the night before had been a few minutes over time. I ended up 5 minutes under time, though, so I'll definitely remember to slow it down next time.
I've gotten only incredible feedback, including from Ross Tuck, who's a brilliant speaker, and so I'm overjoyed at how it went.
Laravel is a bit of a dark horse in a lot of PHP circles. Some of my friends have told me that Laravel folks are over-sensitive to this, but my experiences at Laracon Eu, and on social media during and after, have actually strengthened this perception for me.
It's a joy to see how many of the speakers were not Laravel users. It was funny to watch them share "I've never used Laravel before" and expect to be stoned, and instead just receive a few chuckles. I had great conversations with Rafael Dohms about some of the reasons folks perceive Laravel like they do--including the fact that there's a big difference between the mass of people using Laravel vs. the attitudes and relationships within the community's "regulars", if you will.
Honestly, I think that the majority of Laravel-related drama comes from the usual Internet Problem: Forgetting that the people on the other side of your tweet, blog post, or angry Pull Request are real people, with real motivations and real insecurities. It's encouraging to see tweets like this from Raf: "[T]heir community feel a lot different from the inside." When you see the Laravel community as a large and diverse community of developers at various stages in their growth, rather than a monolithic giant where every opinion is flattened to the Lowest Common Denominator, it's a lot easier to understand why the community has grown as it has.
If you follow me on Twitter or at my other blog, you know diversity, multiethnicity, and justice are really important to me. So, I was very glad to see not one but two talks about diversity and openness: Coding Like a Girl and The Code Manifesto: Empowering Our Community. I was less glad to see a few of the responses to their talks, but thankfully the primary response was openness, supportiveness, and curiosity.
Over the last year I've emailed a few organizations that are targeted at helping women and people of color learn to code. I said, "I'm interested in partnering with you to help create a hiring pipeline--I love what you're doing and I have connections to folks who are hiring supervisors at their (tech) companies." I sadly wasn't able to get in touch with anyone, and asked a group of people about ideas for how to address this.
The speakers from the two talks I linked above heard this and decided to act on it, so Gabi and Kayla created WeDiversifi, with the goal of it becoming a portal for hiring supervisors who want to be a part of increasing diversity in the tech workforce. I'm very interested to see where they go with this, and to hear any other voices on possible next steps for hiring supervisors to be a part of encouraging historically underrepresented groups to thrive in the tech community.
Shawn and company threw a great conference with Laracon Eu, and I'd gladly attend again. It was a joy to meet so many people, to speak and to hear talks, to receive such helpful feedback on my talk, and to enjoy hanging out with friends old and new.
I would love to think more deeply and journal longer about my experiences at the conference, but a week away from work doesn't leave a lot of free time upon return. My overburdened inbox beckons...
[Sharing Laravel: Bringing Laravel's Best Assets to Any Project](https://speakerdeck.com/mattstauffer/sharing-laravel-bringing-laravels-best-assets-to-any-project)
The point of this project is to show developers, especially those working with legacy projects in frameworks like CodeIgniter, how they can use Laravel components (which are called Illuminate components) in their projects.
I want to have a simplest-use-case example for each component, with as few dependencies and as little bootstrapping as possible, but I'd also be happy to have more complicated examples and, hopefully, eventually a standard bootstrap to get a Laravel-style Application instance and Service Provider structure. (For an already-running GitHub project that is similar to this bootstrap I'm describing, check out Jeremy Vaught's CodeIgniter Service Level, which I haven't yet had the chance to work with.)
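To give a flavor of the idea, here's a hypothetical sketch (not code from the repo) of using a single Illuminate component--the IoC container--outside of Laravel. The `CodeIgniterMailerAdapter` class is invented for illustration; the only real dependency is `composer require illuminate/container`:

```php
<?php
// Hypothetical sketch -- assumes you've run: composer require illuminate/container
require 'vendor/autoload.php';

use Illuminate\Container\Container;

$container = new Container();

// Bind an abstract name to a concrete implementation
$container->bind('mailer', function () {
    return new CodeIgniterMailerAdapter(); // invented legacy-app class
});

// Resolve it anywhere in your legacy app
$mailer = $container->make('mailer');
```

Each Illuminate component can be pulled in piecemeal like this, which is exactly what makes them useful in legacy codebases.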
So, please check out the repo (mattstauffer/IlluminateNonLaravel) and watch it for the future, or if you have some experience working with this sort of stuff, please send a Pull Request to share some code.
I recorded a quick 5-minute video as an intro to squashing with git.
NOTE: All of the editors that pop up will use your system-wide default editor. I use Vim, but you can set it to anything you'd like.
Basically, when you're ready to squash some commits, just figure out how many commits back you'd like to include in your rebase-ing session. Let's say it's 24 commits. Now run this from your project directory:
$ git rebase -i HEAD~24
Now you're in interactive rebase mode. Change "pick" to "squash" for any lines that you want to merge into the commit above them, and then follow the prompts to set the commit messages for the new commits.
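For example, the interactive editor buffer might look something like this after you've changed a couple of lines (the hashes and messages here are made up):

```
pick a1b2c3d Add user model
squash e4f5a6b Fix typo in user model
squash 9c8d7e6 Tweak user validation
pick 0f1e2d3 Add login controller
```

When you save and close, each `squash` line is folded into the nearest `pick` above it, and you'll be prompted to write the combined commit message.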
Check out the video for more details and examples.
Note: If you're actually going to be doing something like I did in the video, it'll be a lot easier to get a commit hash from git log rather than counting down 42 commits. Just copy the hash--the gibberish at the beginning of each log line--and use it like so:
git rebase -i 01a93091
I wanted to clarify a little bit, and hope this could add a little to the conversation around well-architected, modular CSS: At Tighten we do use BEM... but we also use OOCSS. And SMACSS. At the same time.
I'd love to share why and how.
OOCSS is a programming paradigm. OOCSS stands for Object Oriented CSS, so it's best understood in the context of Object Oriented programming: classic (spaghetti) CSS vs. OOCSS is a bit like procedural (spaghetti) backend code vs. Object-Oriented backend code.
OOCSS focuses on flexible, modular, swappable components that do One Thing Well: the single responsibility principle, separation of concerns, and many more of the foundational concepts of Object-Oriented Programming.
For a great introduction to OOCSS, this post on the OOCSS Media Object (written by one of the people behind OOCSS) shows an example of what a CSS object looks like, and some of the benefits of using one.
Here's a sample, from that post, of an OOCSS object:
.media {}
.media .img {}
.media .img img {}
.media .imgExt {}
.bd {}
As you can see, `.media` is an object, and the goal is to make that object independent of its surroundings so that it can be placed anywhere in your site.
SMACSS stands for Scalable and Modular Architecture for CSS. It's a book and a methodology for writing CSS (created by Jonathan Snook), but its most significant and influential aspect is its organizational system, which is designed to provide a set of buckets into which CSS should be organized. To learn more, check out the SMACSS web site and read or order the book there.
BEM is a specific concrete application of OOCSS. BEM stands for Block Element Modifier, and it describes the pattern of each CSS object's class name. We use a modified form of BEM, described best by CSS Wizardry's post titled MindBEMding.
Essentially, each BEM class starts with a block, which is an object name. Let's start with `.byline`. Then, for children of that block, you add an element, separating it with two underscores: `.byline__name`. Finally, you can modify any class (block or element) by adding a modifier, separated with two hyphens: `.byline--expanded`.
.byline {}
.byline__name {}
.byline__title {}
.byline__picture {}
.byline--expanded {}
.byline--expanded__bio {}
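In markup, those classes might be used something like this (a hypothetical example--the content and file names are invented):

```html
<div class="byline byline--expanded">
    <img class="byline__picture" src="avatar.jpg" alt="">
    <span class="byline__name">Matt Stauffer</span>
    <span class="byline__title">Developer</span>
</div>
```

Notice that a modifier like `byline--expanded` is applied alongside the base block class, not in place of it.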
Here's the OOCSS media object in BEM syntax:
.media {}
.media__img {}
.media__img--rev {}
.media__body {}
This post isn't the best place to describe the merits of BEM, but a few quick benefits: modularity, a shallow selector structure, and a much decreased likelihood of class name overlap are some of the biggest benefits of using BEM.
Since OOCSS is an abstract coding methodology, BEM is a concrete application of OOCSS, and SMACSS is an OOCSS-focused organizational structure, they actually play together very nicely--especially when you throw Sass into the mix.
Each of our applications has a core `style.scss` file, which includes several partials. We use a SMACSS-inspired organizational structure, so we'll usually end up with a few basic files:
- Core file (`style.scss`): imports the others.
- `_base`: includes `normalize.css`, and also sets styles on base elements: `html`, `body`, `a`, `ul`, `li`, etc.
- `_layout`: depending on the complexity of the site, we will likely have a file dedicated to layout. Grids, responsive frameworks, wrappers, etc. all live here.
- `_modules`: includes definitions for our modules, or objects. The goal is for as much code to exist in here as possible, making it flexible and reusable. This file will just be a list of modules defined (and documented) one after another.
- The catch-all (its name varies): essentially all the code that doesn't fit in `_base`, `_layout`, or `_modules`. Code we just couldn't make modular; glue between modules; top-level layouts; etc.
Also from CSS Wizardry (see CSS Wizardry's post on shame.css), a `_shame` file is something we've been trying out only recently. This file is a place where you put all the code you're not proud of, with the intention of A) isolating it and B) fixing it later. The goal is for this file to be empty, but sometimes you just have to throw that hack in there to get it working.
We may also add a `_javascript.scss` if we're not using Gulp or Grunt to concatenate the styles for our JavaScript plugins.
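Put together, the core file ends up being mostly imports--something like this sketch (the exact partial names will vary per project):

```scss
// style.scss -- hypothetical import list based on the structure described above
@import 'base';
@import 'layout';
@import 'modules';
@import 'etc';
@import 'shame';
```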
In the past, a lot of the benefits of nesting with Sass were lost when you switched to BEM:
/* Sass pre-BEM: */
.object {
color: red;
.descendant {
color: black;
}
}
/* Generates:
.object {
color: red;
}
.object .descendant {
color: black;
}
*/
/* Sass with BEM: */
.object {
color: red;
}
.object__descendant {
color: black;
}
/* Generates:
.object {
color: red;
}
.object__descendant {
color: black;
}
*/
But with Sass 3.3, we can finally get Sass nesting with BEM modules using the & to prefix our elements or modifiers:
/* Sass 3.3+ w/BEM: */
.object {
color: red;
&__descendant {
color: black;
}
}
/* Generates:
.object {
color: red;
}
.object__descendant {
color: black;
}
*/
As you can see, OOCSS, SMACSS, and BEM can play together nicely. And, as a result, we start to see a lot of the benefits of Object Orientation come to fruition in even our frontend code. It's a beautiful thing.
Do you have tips, tricks, or corrections? Let me know on Twitter at @stauffermatt.
]]>I asked a question about this on the Craft StackExchange, and I got some great answers. I came away thinking there's no perfect solution, so I built two new solutions, and want to present them together with a few great suggestions I got in that thread. Please note that I don't think any of these solutions are perfect for every setting, but instead that they're all tools in the belt of a Craft developer.
In order of ease-of-use, the solutions I've found for syncing assets from a remote Craft site to a local one are the following:

- The DownloadAssets Craft plugin
- SyncCraft, a shell script
- Rsync via Gulp (or Grunt)
- Capistrano and similar automation tools
One note on that ordering: while Rsync via Gulp/Grunt may be harder to implement if you're not running Gulp or Grunt already, if you are already running one of them, you'll absolutely want to consider a solution that fits in with a tool you're already using. I personally use Gulp on all of my sites, and will strongly consider using Rsync via Gulp if I can get it to be as flexible as SyncCraft. For the moment, though, I'll be using SyncCraft, since it also syncs my database.
DownloadAssets is a Craft plugin that adds a dashboard widget that makes it simple to download a Zip archive of all of your assets, either by source or for the entire site. Note that it only downloads the assets from Local sources (not S3, etc.).
SyncCraft is a simple shell script that allows you to download and import your Craft database and sync down only new assets into your local asset directory. The initial configuration can be a little bit of work, but once it's set up, syncing down your remote data and files is a snap.
Dave Coggins wrote this fantastic answer on the Craft StackExchange on how to use Rsync via Gulp:
Another alternative to Grunt is to use http://gulpjs.com/. This is what I use for minifying CSS, JS, assets, etc. I've been meaning to set up a way of syncing folders, so I've put together a gulp task to do it. I have roughly tested it, but you might want to look over the code before you use it on a production site :) Be aware that it is set up to sync the folder, so it will remove any local files that are not present on your staging/production server.
To use gulp you need to have node.js installed with npm. First install gulp globally:
$ npm install -g gulp
You might need to run that as sudo.
Next, in the root of your craft project create a gulpfile.js that looks something like this:
// Gulp
var gulp = require('gulp');
// Plugins
var rsync = require("rsyncwrapper").rsync;
// Pull down assets and sync local folder
gulp.task('synclocal', function(){
rsync({
src: "username@hostname.com:/path/to/assets",
dest: "assets",
ssh: true,
recursive: true,
syncDest: true,
compareMode: "checksum"
},function (error,stdout,stderr,cmd) {
if ( error ) {
// failed
console.log(error.message);
} else {
// success
console.log("folder synced!")
}
});
});
Finally, we need to make sure rsyncwrapper is installed. You can do this by running:
$ npm install rsyncwrapper
You should now be able to run the task by typing:
$ gulp synclocal
Marion Newlevant shared a similar tip, but for Rsync with Grunt instead of Gulp.
Capistrano is one of several tools built for automating common tasks on multiple servers. It's extremely powerful and built for managing this type of workflow, but it's also a lot of work to learn. However, if you're looking for something with a lot of power and flexibility, one of these tools will certainly be your best option.
I hope this writeup will help you get in a better place for keeping your assets in sync between your Craft installs. Do you have suggestions, corrections, or requests for my scripts? Let me know on Twitter at @stauffermatt.
Note: Forge now supports AWS out of the box, but much of this tutorial still applies for other non-native VPSes.
Laravel Forge originally had support for Rackspace and AWS (Amazon Web Services), but for various reasons it now supports three options: DigitalOcean, Linode, and "Custom VPS."
Today we're going to get a rudimentary single-instance application running on AWS, managed by Forge, using the "Custom VPS" option. This post assumes little-to-no experience with AWS, but does assume general competency with managing servers, and experience with Forge.
Visit http://aws.amazon.com/console and choose Sign Up. Have a credit card and a phone ready to verify your identity and to add payment information.
Even though you're entering your credit information, what we're setting up today will keep you on the Free tier, so you don't have to worry about being charged immediately.
Once you're signed up and have verified your account, visit the AWS console. Click on the "EC2" button to take you to the management console for EC2, or "Elastic Compute Cloud"--Amazon's service for creating and managing Virtual Machines.
From here, click "Launch Instance".
This allows you to specify which Machine Image--that is, which pre-created recipe for a Virtual Machine--you'd like to base this instance off of. For Forge, you'll want to use Ubuntu Server 14.04 LTS (HVM), so Select that one.
For this demo, we'll go for the lowest power, free option: t2.micro.
Rather than Launching now, let's walk piece by piece through the configuration process.
You can now configure all the specific details for this instance.
The defaults here are fine for a demo, although if you plan to rely on this simple server for anything real, you'll probably want to check Enable Termination Protection so the instance can't be accidentally terminated. When you're done, move to the next screen.
We can configure the amount and type of storage our instance will have available. The default is an 8GiB SSD drive, so let's just keep that as-is and move on.
AWS allows you to tag each instance with up to 10 key/value pairs. This can be useful if you want to sort or add permissions to instances later (using IAM roles) based on client (Client=Bob), environment (Environment=Staging), management service (Managed-By=Forge), or more.
If this is confusing, feel free to just skip it. I just added Managed-By=Forge.
Security Groups let you associate multiple instances with a single set of security permissions. They allow you to both apply the same settings to multiple instances and create an instant firewall surrounding just the members of that group--one of the primary permissions options for rules is "only members of this security group."
Therefore, you'll want to create a new security group for each project you're working on.
Depending on your needs, you'll want to add an entry for each service you plan to expose. I added SSH, HTTP, and MySQL, and you could also add IMAP/SMTP/POP3 if you need mail. You'll see the dropdown contains many other options for adding security access rules.
For now, add:
SECURITY CAUTION FROM AMAZON: If you use 0.0.0.0/0 ("Anywhere") for SSH, you enable all IP addresses to access your instance using SSH. This is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, you'll authorize only a specific IP address or range of addresses to access your instance.
NOTE: I'm not entirely certain of what the correct settings are for SSH to allow Forge to connect. For now, the only way I know is to open SSH access from "Anywhere", but I've messaged Taylor to see if there's a better configuration that both allows Forge access, but locks down your SSH access a bit.
If, later, you're creating a multiple-instance application stack, you will be able to set a Custom IP of "this security group" and allow any instances within this security group to talk to each other.
You might be familiar with SSH key authentication. AWS uses the .pem format, which is just another container for a security key: the file you're downloading will allow you to authenticate yourself to your AWS instances without needing to type a password every time.
This is another great chance to have a specific key per project, but it's entirely up to you. You could also choose to have one key for all of your Forge accounts, one key for the entirety of your Amazon account, or whatever else. You'll see in the screenshots I created one for laravel-forge, but again, I would likely do this project-specific in the future.
Download the file, and place it in a location you'll remember. I created a `pem` directory in `~/.ssh` and placed it there (`~/.ssh/pem/laravel-forge.pem`).
Finally, it's time! Review everything you have set, and once you're satisfied, Launch the instance.
Note: you can optionally click "Creating Billing Alerts" to set up notices for when you get billed over a certain amount.
Wait a bit for it to get up and running, and then go back to View Instances and check the instance. Now, down at the bottom of the screen, you'll be able to view all the important information about this instance.
Log into Forge, and add a new site with the "Custom VPS" tab. Fill in all the fields with the values from the "View Instance" screen on the AWS Console (the screenshot above).
Now click "Create Server". You'll get a popup with a code snippet:
Copy the code from there, paste it somewhere temporary, and adjust where it says `bash forge.sh` to instead say `sudo bash forge.sh`--without `sudo`, you won't have the permissions to run it on your AWS instance.
Note: Running `sudo` on a script you're downloading from the Internet can be dangerous. But I trust Taylor, and this is an https connection, so I think we're safe. If you know otherwise, please let me know.
Open up your local terminal. Before we SSH using the .pem file we downloaded earlier, we'll need to set its permissions appropriately. From the command line, `chmod 400` the file:
$ chmod 400 ~/.ssh/pem/laravel-forge.pem
Now, ssh in using the following format:
$ ssh -i ~/path-to-pem-file ubuntu@instance-public-dns
For example, based on my configuration, you'll get:
$ ssh -i ~/.ssh/pem/laravel-forge.pem ubuntu@ec2-54-191-246-246.us-west-2.compute.amazonaws.com
Now you're SSH'ed into your new instance! Finally, run the command from Taylor. The first half downloads a shell script named `forge.sh`, and the second half (remember, `sudo bash forge.sh`) runs it.
You should see a ton of notices scrolling by, and eventually see the server reboot (and kick you off of SSH).
If you see this, it means Forge has successfully linked with your instance, and you should see a Forge provisioning email show up in your email inbox any minute!
Now, just like with any site on Forge, head over to your DNS and add an A record pointing to your new Public IP.

Note that from here on you'll SSH in as `forge` instead of as `ubuntu`. I recommend adding an entry to your SSH config file to make it easier; then you can just ssh with `$ ssh aws-demo`.
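For reference, that SSH config entry might look something like this (the host name comes from the earlier example, and `aws-demo` is just an alias of your choosing):

```
# ~/.ssh/config
Host aws-demo
    HostName ec2-54-191-246-246.us-west-2.compute.amazonaws.com
    User forge
    IdentityFile ~/.ssh/pem/laravel-forge.pem
```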
That's it! You now have a fully-functional, single-instance application running on AWS, with a MySQL server, managed by Forge, with a domain pointing at it. There's a lot more to do from here--especially if you want to really take hold of the opportunities AWS makes available to you--but you've got the basics now!
Questions? Concerns? Did I royally screw something up (I'm new at this AWS thing, so please do let me know)? Hit me up on Twitter. Otherwise, enjoy!
To address this need--regularly running the same, pre-written script across one or many servers--Forge has the concept of a Recipe. Let's try one out.
Visit Forge Recipes and find a recipe you like. We'll be using artisangoose's recipe "Install ElasticSearch" for our demo. Select all of the content of the recipe and copy it to your clipboard.
Go to the Recipes page on Forge, and add the recipe.
You'll now see your recipe at the bottom of this page in the "Your Recipes" section.
Click the green "play" button to show a popup of all of your servers; check the servers you want to run the script on, press the "Run" button, and that's it!
Once the recipe is finished running, Forge will email you the results.
Now you can create your own arsenal of ready-made recipes and run them at will on your server(s). Once the collection of recipes at ForgeRecipes really grows, there's a ton of potential for these recipes to be the means by which we install and update features that Forge doesn't manage out of the box.
That's it! Enjoy!
But your local environment and database are not particularly in sync with your production environment. Thankfully, Heroku provides the ability to run their "buildpacks" locally to ensure a local environment that's in sync with your remote environment.
Note: I wrote a blog post to get this running on a local Apache install, but the number of steps and configuration issues it required was out of reach for a simple tutorial. The folks at Heroku have shared that they're in the middle of making changes to the Apache configuration for the buildpack, so I expect it'll get easier soon. For now, let's roll with Nginx.
Open up your app's composer.json and add the following to the end of it (or just add the buildpack to your require-dev section, if you already have one):
"require-dev": {
"heroku/heroku-buildpack-php": "dev-master"
}
Run `composer update`, and you'll now have the Heroku PHP Buildpack installed locally. Now, create a file in your project root directory named `.env` and place the following code into it:
CLEARDB_DATABASE_URL=mysql://root:123abc@127.0.0.1/my_laravel_heroku_database_name
This file is a configuration file for the Buildpack, setting an environment variable named `CLEARDB_DATABASE_URL` and setting its value to `mysql://root:123abc@127.0.0.1/my_laravel_heroku_database_name`. This version we created is just for local testing, so add it to your `.gitignore`.
Note that you'll need to update the username (`root`), password (`123abc`), and database name (`my_laravel_heroku_database_name`) for your local environment. Heroku's local buildpack won't be serving MySQL for you, so you'll need MySQL running.
NOTE: If you don't have a command-line mysql accessible and working, Mac/Homebrew users can `brew install mysql` and then follow the directions to have launchd start mysql at login. I believe the default username is `root` and the default password is blank.
Finally, run `foreman start` (unfamiliar with Foreman? Check out my blog post introducing Procfiles) to get everything up and running.
Note: If you get the following response: `This program requires PHP 5.5.11 or newer; check your 'php' command.`, it means your local version of PHP is not up to date with what Heroku is expecting. Run `php -v` on the command line to find what version you're running. Hopefully you're on a Mac using Homebrew, because if you are, it's a relatively painless fix: run `brew update`, then `brew install --with-fpm php55`, and then `brew install php55-mcrypt`. Follow the instructions that are output after you run the installer and you should have PHP 5.5 up and running shortly.
You'll now have a CLEARDB_DATABASE_URL env var available for use in your local database.php, just like in the production database.php (but note we've added a bit to the code to allow for null passwords locally). The benefit of using the `.env` file like this is that we can use the same `database.php` on dev and prod, and just rely on the `.env` file to change up the database credentials:
$url = parse_url(getenv("CLEARDB_DATABASE_URL"));
$host = $url["host"];
$username = $url["user"];
$password = array_key_exists('pass', $url) ? $url["pass"] : '';
$database = substr($url["path"], 1);
return array(
'mysql' => array(
'driver' => 'mysql',
'host' => $host,
'database' => $database,
'username' => $username,
'password' => $password,
'charset' => 'utf8',
'collation' => 'utf8_unicode_ci',
'prefix' => '',
)
);
With that, you have a development-ready local environment that mimics the Heroku PHP buildpack. Just visit localhost:5000 in your browser and you're good to go!
`tighten.slack.com`, `tighten.harvestapp.com`, etc.
In order to implement a feature like this on your own Forge-hosted site, you'll need to be able to create a Site that accepts not just one domain but many. Thankfully, this is easy with Wildcard Subdomains.
When you add a new site, notice that there's a checkbox at the bottom of the form that says "Allow Wildcard Sub-Domains." If you check this box, any subdomain under the domain you enter in the "Root Domain" will be passed to this Site.
Well, imagine you're setting yourself up a Harvest competitor named Schmarvest. You want to set up `schmarvestapp.com` so that every subdomain underneath it is reserved for a different user, but you want all of those thousands of subdomains to be served from one single Laravel app.
If you just create a new site in Forge for `schmarvestapp.com`, and don't use the Wildcard Sub-domains checkbox, you'll see your primary domain (`schmarvestapp.com`) will work just fine. But if you try to visit one of your subdomains, nothing will happen. `tighten.schmarvestapp.com` won't go anywhere. You could, if you wanted, add a new Site for each of those subdomains, but they'd each be hosted on a completely separate codebase.
Instead, add a Site for `schmarvestapp.com` that has the "Allow Wildcard Sub-Domains" option checked. Now if you visit any subdomain under your domain (and if your DNS is pointed to Forge correctly), you'll still always hit your one install.
So, all of this has been assuming you have your domain routing set correctly at your DNS. But what is the appropriate routing?
The most appropriate option is a wildcard A record. So, in your DNS provider's editor, add a new "A" record for your domain.
Set the "Name" field to be `*` (or `*.yourdomain.com`, depending on the prompts your registrar gives you for format). Set the "Address" field to your Forge Server's IP address. And do whatever you'd like with your TTL.
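If your DNS provider uses BIND-style zone files, the record might look roughly like this (the IP address is a placeholder--use your Forge server's):

```
*.schmarvestapp.com.  3600  IN  A  203.0.113.10
```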
That's it! Wait for an hour or so and now all requests to this domain and its subdomains will head over to your Forge server.
That's it! Time for Schmarvest world domination.
Today I needed to get an old CodeIgniter site online somewhere, password-protected, to train someone on. So, I spun up a new site on one of my Forge servers, uploaded the code, ran the migrations, and was ready to go--until I realized I didn't know how to password protect a folder in Nginx.
Thankfully, it's almost easier than it is in Apache.
Just like when you're password protecting a folder in Apache, you'll need to generate an `htpasswd` file. You can create the htpasswd file locally or use DynamicDrive's web-based htpasswd generator. Either way, you should end up with a small text file that convention suggests you name `.htpasswd`.
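If you'd rather skip the web-based generator, you can also create the file from the command line. Here's one way, using `openssl` (the `demo`/`s3cret` credentials are placeholders--use your own):

```shell
# Write a user:hash line in Apache's MD5 (apr1) format, which Nginx accepts.
printf 'demo:%s\n' "$(openssl passwd -apr1 's3cret')" > .htpasswd
cat .htpasswd
```

Each line of the file holds one user; run the `printf` with `>>` instead of `>` to add more users.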
Upload this file to your remote server. I like to put it in the web root for my site and then git ignore it (and, of course, if you're serving the site from the web root you'll want to move it above that level). So my file will live in `/home/forge/my-site-name-here/.htpasswd`.
Now log into Forge. Edit the site you want to password protect. Click the edit icon in the lower right hand corner of the site edit panel, and a box to edit your Nginx config file will pop up.
If you want to password protect the entire site, just append a few lines to the end of the `location /` block, taking it from this:
location / {
try_files $uri $uri/ /index.php?$query_string;
}
to this:
location / {
try_files $uri $uri/ /index.php?$query_string;
auth_basic "Restricted Area";
auth_basic_user_file /home/forge/my-site-name-here/.htpasswd;
}
If you want to just password protect a particular folder, create a new `location {}` block and place the auth instructions there. Learn more about location blocks at the Nginx docs.
Now save the Nginx config in Forge. Forge will auto-apply the config, so you won't need to restart Nginx manually.
Next time you visit this site, you'll see the (probably) familiar HTTP authentication popup. All done!
Every time I opened up my terminal to work on this site, I'd open up three tabs: one for gulp, one for the server, and one for general folder management--`git`, `ls`, `cp`, etc. And I'd have to re-create this tab layout every time.
(Note: I know one of the solutions for this would be to start using tmux, and the moment I do I'll write it up.)
But there's a simple solution for this: Foreman. Foreman is a tool that allows you to declare the processes that are necessary in order to run your app in a file called a `Procfile`. Then just run Foreman and it'll start up all of your processes together, at once, with a nice color-coded output.
Assuming you have Ruby running on your system, simply run `gem install foreman` from anywhere on your command line and you're good to go.
A Procfile is comprised of lines that follow this syntax:
process_nickname: shell_command_to_run_process
So in order to create a process nicknamed "gulp" which runs `gulp watch`, I'd add the following line to my Procfile:
gulp: gulp watch
That's it! You can run as many of these as you want at a time.
So, here's my Procfile for Ionic:
gulp: gulp watch
serve: ionic serve
Now that I have a working Procfile (named `Procfile`) in the root of my project, I simply run `foreman start` and I can see the processes start up and echo out their notices. I can quit out at any point with CTRL-C.
Personally, I like to run this from my IDE's built-in Terminal, and then if I use my IDE's folder/file management and VCS (Git) tools, I can completely stay out of the Terminal while I develop this app. Less alt-tabbing means more focus, which I can't complain about!
That's it! Go forth and Procfile!
Questions? Favorite procfiles you want to share? Hit me up on Twitter.
Christopher Pitt has written a fantastic article about adding an SSL cert in Forge, so I'm not going to duplicate that work.
Check out his post Forge + SSL.
We've already covered queues and queue workers, and we'll talk about daemons soon. But cron jobs are the simplest way to keep things moving in the background, rather than relying on user requests to perform all of your app's logic and heavy lifting. Let's set up our first cron job on Forge.
Honestly, there's not much to explain here.
That's the best part about cron jobs on Forge: You no longer have to memorize the order of the time slots on the cron format. You no longer have to fight hosts who don't offer crons. Click the "Scheduler" tab on your Forge server, choose the path and frequency, hit Schedule Job, and you're done.
The default command is `php /home/forge/default/artisan schedule:run`, which shows you that the simplest use of cron in Laravel is to trigger Artisan commands.
We have an internal app that consumes the Harvest API. But it's painfully slow when we allow the users' visits to trigger the API sync. So now, we have this running every hour in the background:
php /home/forge/sauce/artisan harvest:sync
Note that you're not limited to Artisan commands. Run MySQL backups, run custom shell scripts, copy files around, or whatever else you'd prefer. If you can run it on the command line, you can run it here.
Determine the system user who's running this command. Keep it at `forge` unless you know what you're doing. :)
This does what it says on the tin. Note that the custom schedule option allows you to use the familiar asterisk-style scheduler.
A quick reminder of what that means:
From left to right: Minutes, Hours, Day-of-Month, Month, Day-of-Week.

- First asterisk = Minutes: 0-59
- Second asterisk = Hours: 0-23
- Third asterisk = Day of Month: 1-31
- Fourth asterisk = Month: 1-12
- Fifth asterisk = Day of Week: 0-6 (0 is Sunday, 6 is Saturday)
`*` means "every". So `* * * * *` means "every minute of every hour of every day of every month".
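A few illustrative entries (the first uses the Harvest sync command from above; the backup script path is hypothetical):

```
# min hour day-of-month month day-of-week   command
0    *    *            *     *             php /home/forge/sauce/artisan harvest:sync   # top of every hour
30   2    *            *     0             /home/forge/backup.sh                        # 2:30am every Sunday
```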
For any cron jobs that are running, you can view logs for the output of your cron jobs, so you can diagnose any errors (or just see that they're working).
That's it! Go forth and schedule!
Package managers are the best way to simplify the installation process, because they require minimal work from the user's perspective and provide greater flexibility, consistency, and automation of the install process than installing apps manually.
If you're working with Ruby--even if it's just for Sass and Compass--you absolutely want to use a version manager. The old standby, which I'm most familiar with, is RVM, but if you're familiar with Rbenv that'll do fine as well. Either way, a Ruby version manager will make it easy to install, manage, and switch between multiple versions of Ruby on the same system.
Install RVM:
$ \curl -sSL https://get.rvm.io | bash -s stable --ruby
Now you have convenient command-line methods to install, manage, and switch to Ruby versions easily.
Learn more about how to handle your Ruby versions at RVM's site: RVM basics
If you're working with Node--even if it's just for Gulp or Grunt--you already know that you need Node & NPM. You technically can install these with package managers like Homebrew, but I've heard that NPM and Homebrew often battle each other and that it's worth installing Node separately.
Thankfully, Node's installer packages are extremely easy to use: Install Node (Note: there is a Node Version Manager, if you need to be able to swap out versions of Node. If you're a NodeJS developer, you'll probably want to look there.)
Homebrew is a package manager like Composer, NPM, or RubyGems, but the apps it installs are OS X system-wide command-line (and, with the addition of cask, system-wide GUI) apps for the Mac. Almost any app you've ever installed and used from the Mac command line--php, mysql, optipng, etc.--can likely be installed and managed via Homebrew.
NOTE: Before you install Homebrew, you'll need the latest version of the Xcode command line tools. Thankfully, recent versions of OS X have started auto-prompting you to download it when you try to do anything relatively complex from the command line, so I'm not going to attempt to provide instructions for that here.
You can visit the Homebrew home page to learn more, or just run the following command to install Homebrew:
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
To install a package with Homebrew, you can do one of two things. The classic way of installing a package using Homebrew is to just pass the package (recipe) name as a parameter to the command line app:
$ brew install package_name_here
However, it's still pretty awkward and manual to paste in a massive list of packages to install using this method. So, Homebrew folks created a Brewfile, which is like a Gemfile (or composer.json, or package.json).
NOTE: The syntax and tooling for Brewfiles have changed. I hope to update this post soon, but for now look here: github/homebrew-bundle
A Brewfile is just a text file listing out all of the commands you'd like Homebrew to run in order, one command for each line.
Note: if you ever see brew tap repo_name_here, it's helpful to know that the tap command allows you to add Github repositories to your list of installable Homebrew packages. This way you can install packages that aren't a part of the core Homebrew list of available recipes.
Of course, since we're talking about automating here, I prefer the Brewfile option. Thoughtbot has written a great article entitled Brewfile: a Gemfile, but for Homebrew that will give you all the details you need. We'll end up with a file looking something like this:
NOTE: This entire section is now a bit out-of-date. I hope to update it very soon (would love help if you want to translate it into modern brewfile-ese for me) but if you're following this, check out the GitHub README for instructions for how it works these days.
# Matt Stauffer's awesome Brewfile
tap josegonzalez/php
tap homebrew/dupes
install openssl
install php54-mcrypt
install josegonzalez/php/composer
install wget
install optipng
install redis
install mysql
install phpunit
install postgresql
As you can tell, Homebrew just runs each line as if you had run brew tap josegonzalez/php, brew tap homebrew/dupes, brew install openssl, etc., one after another.
Now, once I've installed Homebrew, I can just navigate to the directory with my Brewfile and run brew bundle. Done.
Up until recently, Homebrew only installed command-line programs. But the homebrew-cask project allows you to manage GUI apps--that is, apps with a graphical interface like Chrome and Skype--via Homebrew as well. Recipes that are for GUI apps are called casks.
First, let's install cask:
$ brew install caskroom/cask/brew-cask
Next, let's try our first install:
$ brew cask install google-chrome
And just like that, you have Google Chrome installed. No navigating to the web site, no downloads, no mounting installers... just run brew cask install app-name and you're ready to go.
NOTE: In order to make your Casks install to sensible and predictable locations, I recommend adding the following line to your ~/.zshrc or ~/.bash_profile.
export HOMEBREW_CASK_OPTS="--appdir=/Applications"
But that can still take a while. If only we could have some sort of file that listed out all of our casks. Something like... a Caskfile. Yes! There's a Caskfile! Can our lives get any easier here?
There's actually no difference between a Caskfile and a Brewfile, which means we'll need to preface all of our lines with cask--but that means we can even install cask within our Caskfile!
Check it out:
# Matt's awesome caskfile
# Install Cask
install caskroom/cask/brew-cask
# Install Casks
cask install alfred
cask install caffeine
cask install flux
cask install virtualbox
cask install vagrant
cask install google-chrome
cask install iterm2
cask install phpstorm
cask install sequel-pro
cask install macvim
cask install adium
cask install nvalt
cask install rdio
cask install slack
cask install textexpander
cask install vlc
cask install the-unarchiver
If you want to run brew bundle on a file that's not named Brewfile, just append the file to the end of the command:
$ brew bundle Caskfile
That's it! As you can tell, a Caskfile is really just a Brewfile that we decided to give a different name and a focused set of commands.
Now that I showed you how to create a separate Caskfile, I'm going to flip the script: You could just move your entire Caskfile up into your Brewfile above, run it, and call it a day. Single file installs everything.
# Matt Stauffer's awesome Brewfile With Casks
tap josegonzalez/php
tap homebrew/dupes
install openssl
install php54-mcrypt
install josegonzalez/php/composer
install wget
install optipng
install redis
install mysql
install phpunit
install postgresql
# Install Cask
install caskroom/cask/brew-cask
# Install Casks
cask install alfred
cask install caffeine
cask install flux
cask install virtualbox
cask install vagrant
cask install google-chrome
cask install iterm2
cask install phpstorm
cask install sequel-pro
cask install adium
cask install nvalt
cask install rdio
cask install slack
cask install textexpander
We now have a single Brewfile that will cover 75% of the apps you need to operate your computer, all with one simple install.
Do you have any favorite casks or essential Homebrew recipes that I missed? Shout out on Twitter.
In my next post I'll talk about creating and syncing dotfiles across your computers, and a Brewfile (or a Caskfile) would be a perfect candidate for that.
If you're not familiar with the concept of dotfiles, check out Github's dotfiles page to learn more about them. Essentially, when someone says "dotfiles" they mean maintaining your command-line preferences in a Git repository (sort of like how I use Dropbox to manage my preference files for TextExpander, etc.) that you install on every computer.
The name dotfiles refers to the fact that most of the files that perform this sort of configuration start with a dot. The Zsh configuration file, for example, is .zshrc. The SSH configuration folder is .ssh. And so on. So the concept of "dotfiles" just means "versioning your configuration files."
Your dotfiles will help you create powerful and consistent shell shortcuts and functions, settings for your editors, color coding and layouts for your shell, preferences and authentication for ssh and mysql and other protocols, and more.
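As a small taste of the kind of thing a shell dotfile contains, here's a minimal sketch--the alias, the exported variable, and the mkcd function are illustrative examples I've made up, not taken from any particular repo:

```shell
# Illustrative lines you might find in a .zshrc or .bashrc
alias ll='ls -lah'        # a friendlier directory listing
export EDITOR=vim         # the editor git, crontab, etc. will open

# A small convenience function: make a directory and cd into it in one step
mkcd() {
  mkdir -p "$1" && cd "$1"
}
```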
I recently forked my old Bash dotfiles as a ZSH dotfiles project, but they're still a bit of a work in progress, so user beware.
NOTE: There are a lot of dotfiles floating around on the Internet. Zack Holman has a well-known collection, as do Paul Irish and Matthias Bynens (all three of these are very Internet-trustworthy, for what that's worth). Be careful installing dotfiles from a source you don't trust. Blindly running a dotfiles installer, or even just adding particular config files to your machine, can add some un-safe settings if you don't understand what you're doing.
- .zshrc, .bashrc, .bash_profile, .bash_prompt, etc.: These are the configuration files for your shell. From here you can set up aliases (shortcuts), functions, environment variables, and even include other configuration files.
- .curlrc, .gvimrc, .vimrc, .wgetrc, etc.: These are configuration files for particular command-line programs. You might be setting font information, default connection information, and more.
- .gitattributes, .gitconfig: Global git configuration.
- .screenrc, .inputrc, .hushlogin: These are files configuring specific aspects of your connection to the shell and/or the terminal.

Depending on the dotfiles repository you're using (or if you're just managing it on your own), there are many different options for configuring and managing your dotfiles' installation process.
Any dotfiles repo will expect you to clone it to your local computer (I do mine at ~/.dotfiles) and then use some method to copy or link those files down into your home directory.
Paul Irish/Mathias Bynens' dotfiles (which mine are branched from) use a script that copies the dotfiles from your dotfiles repo. I actually think this isn't a great idea, because it makes making and syncing changes with the repo a pain. I much prefer Zack Holman's method, which symlinks the files instead, allowing you to git pull and watch your dotfiles instantly update.
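The symlinking approach can be sketched like this--done here in a scratch directory for illustration; in real life you'd link from ~/.dotfiles into your actual home directory:

```shell
# Simulate a dotfiles repo and a home directory in a scratch location
DEMO="$(mktemp -d)"
mkdir -p "$DEMO/dotfiles"
echo 'export EDITOR=vim' > "$DEMO/dotfiles/zshrc"

# Symlink the repo's copy to where the shell would look for it
ln -sf "$DEMO/dotfiles/zshrc" "$DEMO/.zshrc"

# A "git pull" in the dotfiles repo now updates the live file instantly,
# because the symlink always points at the repo's copy.
```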
You can also manually copy files from your dotfiles directory to your root, or you can use one of several dotfiles maintenance apps--I don't have any experience with them, but a friend whom I trust recommended rcm, so I'll probably be trying that out soon, too.
No matter where you got your dotfiles from, they're not unchangeable rules. They're just suggestions. I suggest that if you grab someone else's dotfiles repo, you start by reading through every file to understand what it's doing--and disable anything you don't like.
If you want to make a change later, just go to your home directory and edit those files. If you want to save your changes, and you're working from someone else's repo, fork that repo and clone the forked version instead. That way you can make your changes, and (if you're symlinked, simply, and if you're not symlinked, with a wee bit of work) push your changes back up to your repo. Want to learn more? Check out Zack Holman's excellent post Dotfiles Are Meant to Be Forked.
After talking up Zach Holman's dotfiles so much, I hope you'll consider checking them out. But--they weren't around when I really started digging into dotfiles, so I have my own, somewhat less modern set of dotfiles. I did update them a bit this week, but they're still based on copying instead of symlinking, etc. Check both out (Zack's | Mine) and pick those which you think are best. FYI, my next free weekend will be trying out Zack's, so mine might be on their way to the graveyard soon.
Another set of configurations that I don't want available through the public repo is my SSH config, where I store shortcuts, SSH usernames and URLs, and more. But, I do want it to sync across my devices. So I set it up in my Dropbox folder and then symlink that file into my ssh folder. For example:
$ touch ~/Dropbox/.ssh-config
$ ln -s ~/Dropbox/.ssh-config ~/.ssh/config
Now we have an SSH config file that lives in our Dropbox directory and will be synced across all of our machines every time we make a change--without relying on putting our SSH information publicly on Github.
Here's a snippet of what an SSH config file might look like:
# My Awesome Web Site
Host awesome
Hostname 141.141.141.141
User me_duh
IdentityFile=/Users/me/.ssh/id_for_awesome_site.rsa
# My Other Awesome Web Site
Host other
Hostname ps12345.awesomehost.com
User me_still
Now, I can just type the following and I'll be instantly SSH'ed in (after typing the password, if I haven't set up SSH key authentication):
$ ssh awesome
Done. No more remembering ip addresses, fumbling with command line switches for multiple SSH IDs, or even remembering your ssh usernames. There are many more features you can manage via your SSH config file, if you want--ports, tunneling, and more. Check out the ssh_config docs for more information.
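For instance, a host entry can also pin a non-standard port or tunnel a remote service down to your machine--the hostname, user, and ports below are made up for illustration:

```shell
# Connect on a non-default port and forward local 3307 to the remote MySQL
Host tunneled
    Hostname db.example.com
    User me_again
    Port 2222
    LocalForward 3307 127.0.0.1:3306
```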
If you're not familiar with SSH Key authentication, you might be confused at how you could possibly authenticate with those web sites above without a password. Well, you could type your password every time. But there's a much faster way, if you're willing to do some leg work up front.
I don't have the space here to give a full intro, but here's a quick Github Guide to generating an SSH Key on your new machine. Or, if you're a pro:
$ ssh-keygen -t rsa -C 'your@email.com'
[ follow prompts ]
$ pbcopy < ~/.ssh/id_rsa.pub
Now you have the public version of the key copied to your clipboard, ready to add to Github, Forge, or even upload to your remote site. Again, there's not enough space to do a full intro here to remote SSH key authentication, but here's the short walkthrough to the super-hacky version if you're already familiar:
$ pbcopy < ~/.ssh/id_rsa.pub
$ ssh my_username@myhost.host
[ type password ]
$ cd ~/.ssh
$ vim authorized_keys
[ paste your key at the end of the file]
[ save file and quit ]
If you're not familiar with it, though, please don't just follow the above instructions. They're very hacky, and assume you know what to do if the files and folders aren't there, that you know how to use Vim/emacs/pico, and a lot more. If you're new to that world, I suggest you check out other tutorials online to get a better grasp on it. I've never found a tutorial I'm 100% satisfied with, but here are a few: Slicehost, nixCraft, TutsPlus.
There's a utility named ssh-copy-id that aims to simplify this process, but I've had mixed results with it. You can also do it with scp, but you need to know what you're doing so you don't potentially overwrite someone else's keys. Are you a guru on this? Hit me up on Twitter and I'll update this section with any tips you have to offer.
NOTE: If you're using Forge, you don't have to follow this step about authorized_keys. Just pbcopy your id_rsa.pub and paste it into the SSH keys box in Forge. That's it!
I could write an entire blog series about all of the things that my and Zack's dotfiles do. Coloring your grep output, boosting your ls commands, adding plugins to vim, optimizing your curl and wget, adding convenient functions and methods for your day-to-day work in the terminal... But I'll leave that to you, dear reader, to explore and discover when you read the dotfiles before you install them. Right? Right.
My hope is, even if I haven't explained every piece of every dotfile, you'll come away from this blog post feeling excited and ready to try out versioning your dotfiles and maybe learning a bit from others'.
Do you have any tips about dotfiles or SSH config that I missed here? Do you think I should add another blog post to this series, or add to the existing posts? Hit me up on Twitter and let me know. Thanks, and I hope you've enjoyed! Check out my next (and, for now, final) post showing a stripped-down walkthrough of my install process, without explanations.
I'd love to share some of these tips with you, and I'd also love to hear your tips on Twitter and add them to this post.
The first thing I do when I'm setting up a new machine is get my core files in. There are many ways to sync these core preferences and configuration files--storing them on a thumb drive, in your email, Github, etc.--but I use Dropbox to sync them. So, before I do anything else, I start the Dropbox syncing process. 1password, Alfred, zsh, textexpander, and many of my other apps rely on configuration files that I sync across my machines with Dropbox, so Dropbox sync is vital.
Now that we have our core files synced, we're ready to start customizing our terminal.
Before I do any work in the terminal, I'll be sure to install iTerm2. I actually install it using Homebrew Cask, but we won't get to that until the next blog post, so for now just download iTerm2 and install it.
iTerm2 is free, highly used and tested, and adds a ton of custom functionality that you can add later if you'd like, and also handles some default functionality and styles better than OS X Terminal. So, even if you don't notice the difference, just start with iTerm2 now for the sake of future you.
You may not know this, but even when you open the terminal, you're interacting with one particular "shell"--the default on your Mac is called Bash. You can choose which shell you're most comfortable with, although most people just stick with the default.
For years I maintained a set of configuration files for exactly how I wanted my Bash configured, which I had stolen and cobbled from others. You can still see it at github.com/mattstauffer/dotfiles.
But about a year ago, I discovered that there's a shell called Zsh, and together with a plugin infrastructure called OhMyZSH, it had all of my desired features (and many more) right out of the box. I can't recommend Zsh+OhMyZSH enough.
Installing OhMyZSH is extremely easy, and Zsh is already available on your system:
$ curl -L http://install.ohmyz.sh | sh
Run that, follow the prompts, edit your ~/.zshrc file to set a theme and choose your plugins, and you're ready to go!
Your ~/.zshrc file is the configuration file for Zsh. (What are RC files?)
When you first install OhMyZSH, it'll create a .zshrc with some default settings and options that allow you to choose your OhMyZSH theme, plugins, and settings.
Edit the file in your favorite editor and change the default settings to whatever you'd prefer, and check back in a few posts to see my favorite defaults for my own .zshrc.
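For reference, the theme and plugin settings in a default .zshrc look roughly like this--robbyrussell is OhMyZSH's default theme, and the plugin list is just an example:

```shell
# Excerpt of an OhMyZSH-generated ~/.zshrc (plugin choices illustrative)
export ZSH="$HOME/.oh-my-zsh"
ZSH_THEME="robbyrussell"     # any theme from ~/.oh-my-zsh/themes
plugins=(git brew vagrant)   # OhMyZSH plugins to load
source "$ZSH/oh-my-zsh.sh"
```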
At this point you have a nicely styled, turbo-boosted terminal and synced access to all of your core config files.
Next, we're going to talk about using Homebrew to install command-line and GUI apps, manage dependencies and Ruby versions with NPM and RVM, and use dotfiles to configure your terminal, your editors, and your SSH connections.
There are plenty of use cases for this, but one common one is that it saves your users from waiting while the server performs a complex operation like processing an uploaded image. With queues, the user's interaction pushes the "process image" task (along with any details about the particular image, the user's id, etc.) onto a queue. Then it releases the user back to whatever they were doing. Meanwhile, the queue worker is silently popping one more task off the top of the queue, acting on it, deleting it, and moving on to the next.
There are many different drivers for Laravel; the default is "sync", which just runs the code as if there were no queue. You can use queues on external services, like Iron.io and Amazon SQS. Or you can run your own queue locally, which is what Laravel Forge provides with beanstalkd. In addition, Forge makes it simple to start up a queue worker to run down any of your queues, whether locally on beanstalkd or remotely on Iron.io, and act on them.
I've written about this before, but using a Closure (often with environment variables) to detect your environment instead of an array of hostnames is a little bit of a second-class citizen in Laravel and even more so in Forge. By this, I mean this won't work in Forge:
$env = $app->detectEnvironment(function() {
return getenv('APP_ENV') ?: 'production';
});
Well, this is even more true with Queue Workers--so much that, unless someone shows me what I'm missing, I'm ready to say it is impossible (or at least wildly impractical) to use Laravel beanstalkd queues together with a detectEnvironment Closure. Not only does the detectEnvironment Closure not have the environment variables available to it, but when you're running commands from a queue, it ignores detectEnvironment entirely if you're using a Closure.
I could only get my push queues to work (including correctly detecting environment) if A) I switched detectEnvironment to use a hostnames array or B) was satisfied with the environment always being "production" (which is the fallback response if you use a Closure for your detectEnvironment).
I would guess this is more of a bug or an oversight than an intentional design decision. Or, it's me just doing something wrong. I hope to dig through the source soon and try to wrap my brain around the Artisan bootstrap to understand it better. But for now: If you use a closure to detect your environment, and the environment name for your Forge server isn't "production," you'll have to hold off on this for now. (Am I wrong? Please let me know!)
Assuming we're OK with either A) detecting environment using hostname or B) defaulting to the environment name 'production' for all queues, we're ready to go.
First, you'll want to write a Job Handler, which is really any class with a 'fire' method (and you can even customize which method name gets called). At the end of the fire method, make sure to delete that job so it's removed from the queue. You can learn a lot more about Laravel queues at the docs.
namespace Company\Twitter;
class ProfilePuller
{
public function fire($job, $data)
{
// do something with $data['twitter_handle']
$job->delete();
}
}
Now that you have a job handler, you'll want to push a job up to your queue, referencing that class and passing some data, somewhere in your code--in a controller, for example:
Queue::push('Company\Twitter\ProfilePuller', [
'twitter_handle' => 'stauffermatt'
]);
At this point, if you run your code that triggers this queue it's going to work perfectly. Wait, is it that easy to get your queues set up?
Not quite. The Laravel default queue driver is 'sync', which means "run this code synchronously as if we weren't using queues at all." When the controller hits that Queue::push line, it runs the code in your job handler just like it was inline code. But we want it to run asynchronously.
The next step is telling your app to use your beanstalkd queue instead of the 'sync' queue.
Find the config files for your present environment. For me it was app/config/forge/queue.php (create this file and structure it like the default queue.php if it doesn't exist).
The queue config file has a parameter that's named 'default', which is set to 'sync'. If you've ever edited your app's database settings, this format will be very familiar. Change 'sync' to 'beanstalkd' and your queue pushes will now hit your Forge Beanstalkd queue.
return array(
'default' => 'beanstalkd'
);
There's a composer package that Laravel requires in order to interact with beanstalkd: pda/pheanstalk. Add this to your composer.json and install it.
$ composer require pda/pheanstalk
Push your code up to your Forge server.
Log into Forge, click through the interface to your Site, click the Queue Workers tab, and click Start Worker with all the defaults still entered. These defaults will start a worker that uses the beanstalkd driver, the "default" queue, and some default timeouts and tolerances. You now have a worker up and running, hitting your default queue on your beanstalkd server, managed and kept running by Supervisor.
That's it! Go trigger your Queue::push code from earlier. That'll push the queue task up onto beanstalkd, the worker will pull it down and act on it, and then delete it, and the queue will be clean again. You're good to go, and now your users can breeze around your app, unaware of the raw processing power being thrown at their tasks. There are also plenty of other use cases for queues, but other people (and the docs) have already covered that well.
If you are having any trouble, or want to see evidence of the queue working, go check your logs--which, of course, by this point, are logging to Papertrail, right? Nice and easy.
At any given point in your Laravel app's life, it'll have a particular "environment" defined, which is a string that identifies which environment (local, production, staging, etc.) you're running in. There are two hard-coded environments (production and testing) that have Laravel-specific meanings, but you can create as many as you want.
The ruleset you provide Laravel for detecting your environment happens in bootstrap/start.php. The default is to pass in an associative array, which allows you to change your environment based on your machine's hostname:
$env = $app->detectEnvironment(array(
'local' => array('your-machine-name'),
));
You can also pass a Closure (anonymous function) to detectEnvironment. Our team at Tighten often uses Environment Variables:
$env = $app->detectEnvironment(function() {
if (getenv('APP_NAME_ENV')) {
return getenv('APP_NAME_ENV');
} else {
return 'local'; // Default
}
});
Forge originally only stored its environment variables in .env.ENVIRONMENTNAMEHERE.php files, which caused problems with this method of environment detection. This is no longer the case.
However, based on some of my experiences with queues and other aspects of Forge, I'd still highly recommend you use the associative array form of environment detection rather than using a Closure. Try the following:
$env = $app->detectEnvironment(array(
'production' => array('your-forge-staging-server-host-name-here'),
'local' => array('homestead', '.local')
));
This means: Set the environment to "production" on my Forge server, set it to "local" if it's running on my Homestead vagrant VM, and set it to "local" if it's running locally on anyone's development machine (learn more about .local).
Note: Forge's Papertrail option is no longer baked in to the UI, but it's still possible to manually connect your Forge servers to Papertrail.
But Forge has a pre-built connection with an app called Papertrail that takes all of your logs and pulls them into one, easy-to-view SAAS. And it's painfully easy to set up.
Sign up for Forge. Get your server up and running. Go to the Server page and click the Monitoring tab.
Sign up at papertrailapp.com and navigate to the Setup Systems page. Grab the "your systems will log to" URL and copy it.
Paste this value into your Forge Monitoring tab. Wait for it to provision--this should just take a few minutes.
At this point Papertrail is logging your system logs, but not your Laravel logs. To add Laravel logs, you'll need to add a Syslog Monolog handler.
This would be best in a service provider, but if you just want to test it out you can put it at the top of app/routes.php.
We're basically going to create a new Syslog handler for Monolog and push it onto the logging stack.
$monolog = Log::getMonolog();
$syslog = new \Monolog\Handler\SyslogHandler('papertrail');
$formatter = new \Monolog\Formatter\LineFormatter('%channel%.%level_name%: %message% %extra%');
$syslog->setFormatter($formatter);
$monolog->pushHandler($syslog);
As you can see, we're creating a syslog handler, naming it, providing it a formatter template (which you can customize to your liking), and then pushing it on the monolog handler stack.
Go back to your Papertrail account and view your logs. That's it! Try throwing an exception in your code to see your Laravel logs show up in your Papertrail account.
For an interview with Taylor, the story of how Taylor released hints and puzzles to the community before Laracon, and for a little bit about why Forge is great, check out Adam Engebretson's post Laravel Forge - How Taylor Just Saved Us Hours of Work.
Here, we'll be focusing mainly on the gritty details of Forge. What does it do? How? How can I use it today?
Homestead is a way to develop your Laravel sites locally that provides a consistent environment that's in line with Taylor's preferred development stack: Nginx, MySQL/PostgreSQL, Beanstalk, Redis, Memcached, and PHP 5.5 (at the moment.) Homestead is a pre-configured Vagrant box that matches the stack provided by Forge. Learn more about Homestead here.
Forge is a way to host your Laravel sites on a consistent, predictable, and flexible environment. It's a PAAS (platform as a service) that manages and simplifies the deployment of your code to Digital Ocean, Linode, Amazon EC2, or Rackspace, including creating a remote hosting environment with feature parity to Homestead so your dev and prod environments can be as close as possible. If you're familiar with FortRabbit, Heroku, or EngineYard Cloud, Forge is similar to those--but it's also pretty unique in a few ways.
So! Let's walk through the steps to getting your first Laravel app deployed on Forge.
Sign up for Forge. Simple enough.
In order to give Forge access to your sites' codebases, you want to visit the Dashboard to give Forge permission to access your Github account.
You will be able to manually SSH into your sites at any point, but the automated code deployment features--and there are quite a few of them--all depend on Github.
Also, every new server will automatically be authenticated to your Github account via an SSH key, so even if you deploy manually, Forge's Github connection will make it simpler and smoother.
On the servers page you'll find the first option is to create a new server. Now is a good time to talk about the distinction between the various hosts.
The four hosts you can use with Forge (DigitalOcean, Linode, Rackspace, and Amazon) all allow the user (or a service like Forge acting in your stead) to configure the settings for a virtual machine that will run your web site. In creating Homestead and Forge, Taylor has created a single "preferred" environment for Laravel, and allowed you (via Homestead) to develop locally using it and (via Forge) to deploy to any of these four services.
So, the environment will be the same no matter which server you use. If you don't have an account with any of the four servers, DigitalOcean is easy and user-friendly, and if you sign up with this link I get a referral bonus. :)
In order to grant Forge control over your servers, you'll need to give it API keys (and possibly other pieces of information, depending on the server).
Log in. In the left nav, the bottom item says "API"; click that and you'll get your Client ID and be able to generate or retrieve an API key.
Create or revoke API keys under My Profile
Getting your Access Key ID and Secret Access Key
Once you've entered your authentication information, it's time to create your first server!
Every server needs to have a unique name. Refresh the page a few times and you'll see that Forge is auto-generating readable nonsense names. You can put anything you want in here, but note that it'll need to be unique across your provider's servers, so you probably won't get away with "my-web-site" or something similar.
Pick your poison. I always start with the cheapest option and upgrade as needed.
Many hosts have servers all across the world. If you don't have any other infrastructure-related reason to choose a particular region, try to pick the region closest to the majority of your userbase.
If you create your first database here, you'll save yourself the step below titled "Create Your Database."
For now, just leave this unchecked. These are experimental and flaky (in general, and more so on Forge). Save that for an experiment on a later day.
That's it! You'll see the server appear down below with the status; it'll start with Building and then move on to Provisioning, and within a few minutes you'll be up and running with an Active server.
Once the server goes active, you can click the Manage button...
OK, so you now have a Forge-provisioned server up and running. It's actually live on the Internet! In the Status Bar at the top of the page, you'll see two IP addresses, one with parentheses around it.
The first (non-parentheses-wrapped) IP address is your server's public IP address, and if you type this IP into a browser you'll see the `phpinfo()` output for your server.
The second (parentheses-wrapped) IP address is your server's internal IP address. Just leave that one alone for now.
You should be receiving an email in a moment with all of the configuration details you need, but you won't be able to use them until you upload your SSH key to Forge.
Click the SSH Keys tab and upload your local SSH Key. How to create and copy an SSH Key?
If you intend to visit the site you're setting up via a real domain (versus just testing it by visiting the IP address), you should set that up now, because DNS record changes can take a while to update. Go to your domain name provider, add an A record for the domain or subdomain, and point it to the public IP address of your server. That's it!
Every Forge server can host many sites. Traditionally, a site means one domain and one repository, although if you serve multiple domains or subdomains from the same Laravel repo, one site may answer to several domains. The hard rule is that there's always a one-site-to-one-repo relationship.
By default there's a "default" site installed, which you can use if you plan on this server only serving one web site (or if you're just trying this out and plan to access the app only via its IP address). Otherwise, delete it and follow the directions below to create your first site.
First, set a domain for the site, and optionally set a web directory. The domain is the domain name you expect to access this site from; the web directory is the directory within your repo that you want the files served from. As you can see, the default is "public" because that's the directory Laravel sites serve from.
Before you attach your Laravel codebase to your Forge site, you'll want to create its database--otherwise your initial migrations will fail. You may have done this already in the "Create your first server" step, but if not, or if you'd just like to get access to the database server now, here's how to do it.
Follow the instructions at the Docs' Databases section to get your SQL admin app connected (use Sequel Pro if you don't have one). Note that the usernames and passwords you're entering there are not accessible from Forge, but will have arrived in an email from Forge titled "Server Provisioned." Save this information. That's the only place it lives now.
Connect to your database server, and add a database with the name of your Laravel app's expected database. Now, when you first hook up your Laravel codebase, Forge will be able to run your migrations correctly.
You'll want to modify your app's code for whichever environment this will be. Update your database settings to use the host, username, and password specified in the "Server Provisioned" email. Now push your Forge-ready code up to Github.
Something I learned from Heroku: I store my server IP, database username, and database password in environment variables (see Forge's Site manager, and the Environment tab). Then in my database.php I just set the host, username, and password to be getenv('db_host'), getenv('db_username'), and getenv('db_password'), which get the environment variables with those names.
When you're setting environment variables in Forge, note that the "Environment (optional)" flag allows you to determine which Laravel environment these will be passed to; the default is "production".
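Pulling those pieces together, the relevant slice of the config might look like this. This is a sketch for a Laravel 4-era app/config/database.php; the db_host/db_username/db_password names are whatever you defined in Forge's Environment tab, and your_database_name is a placeholder:

```php
<?php
// app/config/database.php (excerpt) -- credentials come from Forge's
// environment variables rather than being hard-coded in the repo.
return array(
    'default' => 'mysql',
    'connections' => array(
        'mysql' => array(
            'driver'    => 'mysql',
            'host'      => getenv('db_host'),
            'database'  => 'your_database_name',
            'username'  => getenv('db_username'),
            'password'  => getenv('db_password'),
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),
    ),
);
```

If getenv() returns false for any of these, the variable name in your config doesn't match what you entered in Forge, so check spelling and the environment you assigned it to.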
Once the site is ready for you, and you've created your database, updated your app, and pushed the changes to Github, click the Manage button for your site. Choose a Github repo (for example, http://github.com/mattstauffer/confomo would be "mattstauffer/confomo"), a branch (defaults to "master"), and a sync strategy (Quick Deploy deploys the code to your Forge server every time you push to your chosen Github branch; without Quick Deploy you can push manually via Envoy or SSH, or use the "Deploy Now" button whenever you're ready).
You can also choose how to handle Composer and migrations. By default I would check both.
Once you complete this step, Forge will download your files and (optionally) run composer install and artisan migrate.
If everything went smoothly, your site should be up and running on your server, ready for you to go visit it. You can visit the public IP and you'll see the default site.
You should also be able to visit any domains you pointed to this server earlier. Check it out--you just set up your first Forge-provisioned site!
But what if you want to git pull manually? To live-edit the site with Vim (NO!)? To run your database seeds? Or whatever other mischievous things you might want to do with SSH access?
You can use an SSH config file to create easy shortcuts to your Forge server, but even without that, you can just ssh YOUR_IP_HERE from the command line and, since you've already uploaded your SSH key, you should be good to go. Run ls from your root directory and you'll see a folder for every site you set up.
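If you SSH in often, an entry in your SSH config saves typing. A sketch; the "forge-server" alias and key path are up to you, and the forge username is an assumption based on Forge's defaults, so adjust if yours differs:

```
# ~/.ssh/config
Host forge-server
    HostName YOUR_IP_HERE
    User forge
    IdentityFile ~/.ssh/id_rsa
```

With that in place, ssh forge-server drops you straight onto the box.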
If you decide to manage deploying your site manually, you can always use the "Deploy Site" button in the control panel, but you can also set up an Envoy script. For example:
@servers(['forge' => 'YOUR_IP_HERE'])
@task('deploy', ['on' => 'forge'])
cd your-site-domain-here/
git pull
@endtask
@task('full-deploy', ['on' => 'forge'])
cd your-site-domain-here/
git pull
composer install
php artisan migrate
@endtask
By default, all manual deploys and quick deploys from Forge will use the following general script:
cd {dir}
git pull origin {branch}
composer install
php artisan migrate
But if you want to customize this, you can click Edit Deploy Script on the Site details page.
Forge recipes are pre-written Bash scripts (shell scripts) that you can run at any point on any of your servers. Symlinks, installations, downloads, or whatever else you want--write simple shell scripts, give them a name, and easily run them any time you want, with a checkbox for each server you might want to run it on.
Forge also emails the output of the script to you after it's done running.
Forge has server tabs for Scheduler and Daemons and Site tabs for Queue Workers.
Scheduler allows you to create and delete cron jobs, and view logs from previous cron runs.
Daemons are long-running scripts managed by Supervisor.
Queue workers are Artisan queue workers, managed by Supervisor, based on beanstalkd.
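As a concrete example, a Scheduler entry is essentially a managed crontab line; something like this (the path and the artisan command here are hypothetical placeholders, not Forge's actual output):

```
# Run a nightly artisan command at 2:00am as the forge user
0 2 * * * php /home/forge/your-site-domain-here/artisan your:command
```

The Scheduler tab just saves you from editing the crontab by hand and keeps the run logs in one place.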
Link your servers via private networking, create app & database server, etc. -- choose the servers you want to link together and Forge will automatically create private networks between your servers. Just configure your firewalls in the Forge GUI and they'll be up and talking together.
If you want to use Forge to set up your account and then remove its access to your servers, just Archive your server and Forge will be disconnected. You can always un-archive it later if you need to use Forge again for something else.
The bottom of your dashboard has a Recent Events section where you can see every significant event that has happened across all of your servers.
The best way to monitor your apps, though, is to go to the Monitoring tab on a server and connect the server directly to your NewRelic and/or Papertrail accounts.
Forge originally only used .env.ENVIRONMENTNAMEHERE.php
files for environment variables, so there were some difficulties using env vars in your environment detection in bootstart/start.php. This is no longer the case.
This having been said, based on some of my experiences with queues and other services, I would highly recommend using hostnames to set your environment, not env vars. For example:
$env = $app->detectEnvironment(array(
'production' => array('your-forge-server-name-here'),
'local' => array('homestead', '.local'),
));
This will set your homestead instance to be "local", many local dev machines (Mac and Linux, not sure about Windows) to automatically be "local", and set your Forge server as "production".
Your site will, by default, be accessible via your public IP. Be careful that this doesn't leave you showing the depths of your site to the world--for example, if your detectEnvironment settings in bootstrap/start.php default to an environment with debug set to true, someone might be able to get a dev-style Whoops page by visiting your IP address, which would reveal config details about your app that you don't want public.
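One way to guard against this--a sketch, assuming Laravel 4's per-environment config cascade--is to keep debug off in the base config and enable it only in the local environment's override file:

```php
<?php
// app/config/app.php -- the default every environment inherits,
// including whatever environment an unknown hostname falls into:
return array(
    'debug' => false,
    // ...
);

// app/config/local/app.php -- merged on top of the base config
// only when the environment is detected as "local":
return array(
    'debug' => true,
);
```

That way an unexpected environment match fails safe instead of failing loud.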
Over the next few weeks I'll be posting how-to's about various aspects of using Forge: multiple domains and subdomains in one environment, advanced environment variables and environment detection, queues and daemons, PostgreSQL, and more.
Check back here, or follow me at @stauffermatt to see when I post more.
Did I make any mistakes, or do you have any questions? @stauffermatt for that too. Thanks!
First, navigate to your app directory and add PostgreSQL as a Heroku add-on (if you haven't followed the first tutorial, you'll need to do that first to install the Heroku toolset, get this Laravel app connected to a Heroku app, etc.):
$ heroku addons:create heroku-postgresql:hobby-dev
You should see output like this:
Adding heroku-postgresql:hobby-dev on app-name-here... done, v14 (free)
Attached as HEROKU_POSTGRESQL_COLOR_URL
Database has been created and is available
! This database is empty. If upgrading, you can transfer
! data from another database with pgbackups:restore.
Use `heroku addons:docs heroku-postgresql` to view documentation.
The environment variable for your PostgreSQL database URL has a COLOR variable in the name itself: HEROKU_POSTGRESQL_PINK_URL, HEROKU_POSTGRESQL_BLUE_URL, etc., and depending on the server you're on, that color may be different. That means you can't necessarily rely on the name of that environment variable always being the same, so be sure not to rely on the HEROKU_POSTGRESQL_COLOR_URL for your database configuration. Read on for how to handle it instead.
At any point, you can find both the name of your PostgreSQL variable and its value by running the following:
$ heroku config | grep HEROKU_POSTGRESQL
You should see something like the following:
HEROKU_POSTGRESQL_RED_URL: postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398
If you check out your heroku config, you should now see that you have a DATABASE_URL that's set to the same value as the HEROKU_POSTGRESQL_COLOR_URL. That is the environment variable you want to work from.
On apps with multiple databases, or if you didn't get the DATABASE_URL
set properly for some reason, you can promote a particular server to be the primary database:
$ heroku pg:promote HEROKU_POSTGRESQL_RED_URL
At this point your database should be up and running. Now, let's edit your Laravel config to point to the PostgreSQL database.
Once again, if this is a real app, you're going to want to only be making these changes in your production configuration settings, but for now we're just hacking at a dummy app.
First, change the value of 'default' in app/config/database.php to 'pgsql'.
'default' => 'pgsql',
Then, just like we did with MySQL, set the following at the top of your database.php:
$url = parse_url(getenv("DATABASE_URL"));
$host = $url["host"];
$username = $url["user"];
$password = $url["pass"];
$database = substr($url["path"], 1);
Then change your pgsql
entry in that same file to be the following:
'pgsql' => array(
'driver' => 'pgsql',
'host' => $host,
'database' => $database,
'username' => $username,
'password' => $password,
'charset' => 'utf8',
'prefix' => '',
'schema' => 'public',
),
That's it! Commit and push and migrate:
$ git add .
$ git commit -m "Convert to use Heroku PostgreSQL database"
$ git push heroku master
$ heroku run php /app/artisan migrate
Check out your Heroku URL in the browser, and you should see the app running just like it was when it was MySQL.
Congratulations! You're now a Laravel + Heroku + database pro.
So, there's good news and bad news (but more good than bad).
First, the bad: Heroku doesn't use MySQL on its servers. But that's it for the bad news.
The good news: Heroku uses PostgreSQL, which is significantly better than MySQL in many ways. Also, Laravel has a PostgreSQL driver built in. Also, there are MySQL Heroku add-ons you can purchase for smaller scale work, and they each have free intro versions.
As you can see, we're in great shape here. This post will cover Laravel, Heroku, and MySQL, and the next post will cover the same with PostgreSQL.
NOTE: At the time of writing this post, JawsDB didn't exist, so we'll just be covering ClearDB.
If you're set on MySQL, there's a Heroku Add-on called ClearDB that provides relatively first-class MySQL support to Heroku apps.
So, first, let's install ClearDB. Navigate to your app directory locally and use the Heroku toolbelt to install the add-on:
$ heroku addons:add cleardb
You should see the following:
Adding cleardb on app-name-here... done, v6 (free)
Use `heroku addons:docs cleardb` to view documentation.
You're now on the limited free tier of the ClearDB add-on. You can retrieve your database URL at any point by running the following command, which retrieves your Heroku config and then greps out just the line beginning with CLEARDB_DATABASE_URL:
$ heroku config | grep CLEARDB_DATABASE_URL
It should look something like this:
CLEARDB_DATABASE_URL: mysql://h95b1k2b5k2kj:ont1948@us-cdbr-east-05.cleardb.net/heroku_nt9102903498235n?reconnect=true
Don't worry about writing that down, though, because it's going to be passed into our app as an environment variable.
For more thorough instructions on setting up ClearDB, check out their provisioning docs.
Next, let's modify our Laravel app to connect to ClearDB.
First, let's add a few quick lines to our Laravel app that make it actually need a database. Thankfully, there's already a user authentication model and system built into Laravel, so let's just hit it for our default route. Edit routes.php
and change its contents to the following:
Route::get('/', function()
{
return User::all();
});
Now generate a migration to create the users table:
$ php artisan migrate:make create_users_table --create=users
Next, let's add in our Heroku creds. Again, if you're working on a real site, you should edit the database credentials specifically for your production environment, but since we're just hacking out a dummy app, we're going to edit app/config/database.php directly.
For now let's just do a bit of procedural code at the top of database.php. We're telling our app to get the CLEARDB_DATABASE_URL environment variable and then split it out.
$url = parse_url(getenv("CLEARDB_DATABASE_URL"));
$host = $url["host"];
$username = $url["user"];
$password = $url["pass"];
$database = substr($url["path"], 1);
Remember, the CLEARDB_DATABASE_URL value we looked at before was just a URL, so we're using PHP's parse_url function to pull out the pieces of that URL and convert them into Laravel-config-friendly variables.
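To see exactly what parse_url is doing, here's a quick standalone demo using a made-up URL in the same shape as ClearDB's (every credential below is fake; run it with php from the command line):

```php
<?php
// A fake ClearDB-style URL, same shape as the real CLEARDB_DATABASE_URL
$url = parse_url("mysql://myuser:mypass@db.example.com/herokudb?reconnect=true");

$host     = $url["host"];            // "db.example.com"
$username = $url["user"];            // "myuser"
$password = $url["pass"];            // "mypass"
$database = substr($url["path"], 1); // path is "/herokudb"; strip the leading slash

echo "$host $username $password $database\n"; // db.example.com myuser mypass herokudb
```

The query string (?reconnect=true) lands in $url["query"] and can be safely ignored for Laravel's purposes.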
Now just find the 'mysql'
entry in the database.php config array, and change the values accordingly:
'mysql' => array(
'driver' => 'mysql',
'host' => $host,
'database' => $database,
'username' => $username,
'password' => $password,
'charset' => 'utf8',
'collation' => 'utf8_unicode_ci',
'prefix' => '',
),
Of course, this will break locally, so let's test it out on Heroku.
$ git add .
$ git commit -m "Add Heroku creds and update default route to hit the DB."
$ git push heroku master
Now we need to remotely run our migration, which we do with the following:
$ heroku run php /app/artisan migrate
If everything runs without errors, you should be able to visit your site in your browser and see the seeds of your future Laravel app: the JSON-encoded dump of your users table (which, at the moment, is empty).
That's it! You now know how to use MySQL on Heroku, run migrations and other artisan commands remotely, and deploy your code to your Heroku app.
You can see that it takes a bit of work, but you can get MySQL databases up and running on Heroku with Laravel quickly and simply. Check out my next post for how to get Laravel working with Heroku PostgreSQL.
So, let's take a look at the fastest and simplest way to get a stock Laravel install up and running on Heroku.
Sign up for a Heroku account and install the Heroku toolbelt, a command-line toolkit for managing your Heroku apps.
However you prefer, get your Laravel project initialized.
$ laravel new laravel-heroku
$ cd laravel-heroku
Heroku knows which processes to run for your app based on a configuration file called a Procfile. The default apache2 process (if you don't use a Procfile) points to the web root, not to /public... so we need to create a custom Procfile to serve the site from /public.
Add a file with the name Procfile
(capitalization matters) that contains this line:
web: vendor/bin/heroku-php-apache2 public/
(more details: https://devcenter.heroku.com/articles/custom-php-settings#setting-the-document-root)
OK, our code is ready to go. Let's get it into git.
$ git init
$ git add .
$ git commit -m "Initial commit of stock Laravel install."
Since you have the Heroku Toolbelt installed, you can create and modify your apps directly from the command line.
$ heroku create
The output/prompt should look something like this:
± heroku create
Creating app... !
▸ Invalid credentials provided.
heroku: Press any key to open up the browser to login or q to exit:
Logging in... done
Logged in as me@email.com
Creating app... done, ⬢ app-name-here
https://app-name-here.herokuapp.com/ | https://git.heroku.com/app-name-here.git
Write down or just remember the "app-name-here"; this is the unique identifier for the Heroku app you just created. The app will run on the Heroku Cedar stack by default.
APP_KEY
Let's set an APP_KEY environment variable for our encryption key now.
Generate a new key:
php artisan key:generate --show
Copy the output of that, and then run this command:
heroku config:set APP_KEY=the_key_you_copied_here
You should see this output:
Setting config vars and restarting ⬢ app-name-here... done, v3
APP_KEY: the_key_you_copied_here
With Heroku, you push new code to your site by pushing to the heroku
git remote.
$ git push heroku master
Look for this at the end:
----> Launching... done, v3
http://app-name-here.herokuapp.com/ deployed to Heroku
Check it out: visit http://app-name-here.herokuapp.com/ in your browser and you should see your app running.
Heroku's PHP support is not the only thing that has gotten an upgrade; their PHP support documentation is now fantastic. Check it out for many more tutorials and much more in-depth introductions. Heroku - Getting Started With PHP
They've also, since I wrote this article, added Laravel-specific documentation: Heroku - Getting Started with Laravel
If you have any issues with this walkthrough, please let me know on Twitter so I can keep this up-to-date. Thanks!
TL;DR: I just wrote an app called Markedstyle, a centralized repository for custom Marked CSS styles.
I've been looking for an opportunity to rapidly prototype a simple web app using Laravel 4 (the best way to learn is to practice), so when I saw Brett Terpstra's recent post on collecting styles for his app Marked I knew I had a perfect opportunity.
I wish I had run an actual timer, but within a few very brief evening programming sessions I had a basic REST-ish app allowing me to create, view, and edit styles and users. I used Bootstrap for rapid front-end prototyping, but followed the tips at Bootstrap without all the debt to save myself the pain of Bootstrap classes in the HTML.
The ease with which I can prototype an app in Laravel is incredible, but even more important is that I feel confident that I wouldn't have to change much to consider this a production app. Getting the basic resource routes and controllers up and running, as well as database schemas and seeds prepared, was incredibly simple. Suffice it to say that I'm in love with Laravel.
I filled the app with the styles currently available at Brett's Git repo, and made a wishlist for features I hope to develop soon--voting, Sass/SCSS style upload, etc. I then deployed it to ArcusTech, so it should be incredibly fast.
The site is online at markedstyle.com and the source code is publicly viewable at github.com/mattstauffer/markedstyle. I know this isn't a login-every-day kind of thing, and that it'll be a lot more useful once I'm tracking clicks and votes, but I figured I'd put it out there for now and see what the Marked and Laravel communities think of it.
Thanks! Please share any thoughts (or if I totally screwed up the code, pull requests).
Markdown is an incredible tool for formatting text in a light, clean manner; text written in Markdown is legible before it's processed, easily written by hand, and easily processed by computer.
Marked is a simple, clean program that monitors local Markdown files and actively updates a preview window every time they change.
StackOverflow to the rescue. This is focused around Unix-based systems, so, sorry Windows folks.
Here's mine:
#!/bin/bash
remote_dir=/www/remote-user-name/sql_backup_directory   # where the dumps live on the server
target_dir=~/local_sql_directory                        # where to copy them locally
destination=ssh_username@hostname
# List the remote directory newest-first, take the top entry, and scp it down
scp $destination:`ssh $destination ls -1td $remote_dir/\* | head -1` $target_dir
Of course, you'll want to update remote_dir, target_dir, and destination's values to be appropriate for your system.
Note: When I first created this file, the permissions weren't correct for cron to run it. I chmod'ed the file to 777 to test to make sure it works, but I still need to figure out what the absolute best chmod value would be.
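I can't claim the one true answer either, but 777 is broader than cron needs (it makes the script world-writable); since a user crontab runs as your own user, 700 is enough. A sketch, using a stand-in file name:

```shell
touch backup.sh       # stand-in for your actual backup script
chmod 700 backup.sh   # owner can read/write/execute; nobody else can touch it
ls -l backup.sh       # permissions column should read -rwx------
```

If other local users also need to run it, 755 (owner rwx, everyone else rx) is the next step up.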
Note 2: You'll need to have ssh key authorization already set up for this domain so scp & ssh can access it properly.
First, edit your crontab:
$ crontab -e
Then, paste your line:
0 2 * * * /path/to/shell/script.sh
The above line runs the script at 2am every day, but you can adjust the timing--learn more about cron timing.
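For reference, the five crontab fields are minute, hour, day of month, month, and day of week, in that order. A couple of variations on the same line (the script path is the placeholder from above):

```
# m   h   dom mon dow  command
  0   2   *   *   *    /path/to/shell/script.sh   # 2:00am every day (as above)
  30  4   *   *   1    /path/to/shell/script.sh   # 4:30am every Monday
  0   */6 *   *   *    /path/to/shell/script.sh   # every six hours, on the hour
```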
That's it! You now have a local cron job running once a day, copying your remotely generated files to your local computer.
By default, the crontab editor uses vim. Trust me when I tell you that learning vim is absolutely worth it--check out the intro to vim that ships with vim by running $ vimtutor
to learn the basics in 5-10 minutes. But if you need to just get by for now:
Type i to enter Insert mode, and now you can type. To save, hit esc, then type :x and hit return. That's it!
Are you looking for a server-side cron job to set up daily SQL dumps? Here's mine:
0 1 * * * /usr/bin/mysqldump -h host-name-here -uusername-here -ppassword-here --databases database-name-here > /path/to/sql/dumps/$(date +\%Y-\%m-\%d).sql
That generates files named for the date, every night at 1am. You can adjust the timing, and be sure to substitute your own path to mysqldump, MySQL host name, username, password, and database name. Notice that the username and password flags do not have spaces--so if your username is bob and your password is secret, those flags would be -ubob and -psecret.
How do you debug problems? If you add the following line to the top of your crontab, it will email you any output from your cron jobs.
MAILTO=youremailaddress@hotmail.com
I haven't yet tested to see if this works locally, though, so I'll update this when I have.
The PHP ternary operator is often a key player in the terseness vs. clarity argument. A brief refresher: The PHP ternary operator allows you to write single-line comparisons, replacing code like the following:
<?php
if (isset($value)) {
$output = $value;
} else {
$output = 'No value set.';
}
with this:
<?php
$output = isset($value) ? $value : 'No value set.';
The second code example is clearly simpler, and in many cases (although certainly not all), it still retains enough clarity to be a worthy tool. There's plenty of debate about whether the ternary operator sacrifices clarity at the expense of conciseness; let's just say it's a tool, and like any tool, it can be used well or poorly.
The syntax for the regular ternary operator is (expression) ? value if truthy : value if falsy
. The expression can also just be a single variable, which will test whether the variable is truthy or falsy:
<?php
$output = $value ? $value : 'No value set.';
The problem is, the above example is both common and annoyingly repetitive: having to write $value twice like that just feels wrong.
Well, I discovered today that PHP 5.3 introduced an even terser syntax for this use of the ternary operator. You can learn more at the docs, but here's how we could make the above example even more concise:
<?php
$output = $value ?: 'No value set.';
If this looks familiar, it's because it is: this is exactly how PHP shortens other operators, like shortening this:
<?php
$value = $value . $other_value;
to this:
<?php
$value .= $other_value;
For the sake of clarity, just because we can shorten something doesn't mean we should. But, when we can write terse code that is also appropriately clear, we should, and this feature allows us to DRY up the ternary operator in many cases.
The problem is that there are thousands of PHP sites, PHP apps (and PHP developers) that are low quality, untested, procedural, antiquated, poorly designed and poorly commented. Much PHP code is flaky, hackable, and ugly, and the language itself was clearly not designed with modern coding standards and conventions in mind--its naming is inconsistent, its OOP features are frustratingly-designed afterthoughts, and it suffers from myriad other painful little problems that sum up to a big pain in the rear.
I've developed in CodeIgniter, a PHP-based MVC framework, for years, and while the addition of a framework and of the MVC methodology have brought some much-needed structure to my PHP world, I've still longed for the beauty, simplicity, and expressiveness of Ruby on Rails. After working on a few Rails apps for work, I came close to fully changing over my loyalties.
And then came Laravel. Laravel is a light, flexible, expressive framework that capitalizes on the best parts of PHP and does its best to supplant the places where PHP is most lacking. Laravel is powerful, extensible, rewritable, and is in active development by some really brilliant people. The learning curve is low, the power is high, and I cannot begin to express the joy it's given me to work in this framework.
Laravel has singlehandedly rekindled my love for PHP, and my hope that it may actually contend with Ruby (especially on Rails) as a legitimate language for web app development.
Taylor Otwell, I owe you a beer. And my PHP-coding life.
Before LessConf 2013, I didn't see myself as an entrepreneur, let alone a "startup founder." I knew what an entrepreneur and a founder looked like, and I was neither.
I worked as a freelance web developer during college, but after graduation I left the tech world to work at a non-profit. My job included raising funds to support my salary, something I was ill equipped to do. One of the hardest parts of my job was the administrative work—tracking contacts, interactions, gifts, pledges.
I eventually built a simple web app to simplify that administration, and soon added user accounts to let my friends use it. But to keep it fast and secure, I had to spend time and money, so I started charging a small fee to use it. Eventually I named the app Karani for Fundraisers, got an accountant, and incorporated Karani Productions Inc.
Then my wife had a beautiful baby boy and I got an amazing job, and I hired some folks part time to help out with running Karani. Today Karani has a few hundred users, bringing in a few thousand dollars a month, which goes right out the door to pay my employees. Since leaving my non-profit job I've worked with a variety of startups, enterprises, and development shops, but I've still never considered myself an entrepreneur.
So when I went to LessConf, I was interested in learning at the feet of creators & entrepreneurs. I didn't place myself in the same space as the folks who make up the bulk of LessConf: entrepreneurs, creators, and innovators. I was just a strategist, an implementor.
Over the span of the conference I realized something: I founded Karani. Before I founded it, it didn't exist, and it does now--it's a startup. I did the entire thing without any funding, so it's a bootstrapped startup. And it all just sort of happened, without me knowing terms like churn and angel investor (or reading Hacker News or TechCrunch). So, there I am: an accidental bootstrap founder.
Further, my day job is entrepreneurial--pioneering technologies, consulting for startups, expanding our development team, updating our technology stack, and now joining the leadership team of the company. By day and by night, I'm creating, innovating, starting, founding--I just needed some superfriends to help me realize it.