Today I Learned

A Hashrocket project

179 posts about #sql

Count true values with postgres

I can try to count everything that is not null in postgres:

psql> select count(x.x) from (values (null), ('hi'), (null), ('there')) x (x);
2

But if I try to count everything that is true in postgres, I don’t get what I want:

psql> select count(x.x) from (values (false), (true), (false), (true)) x (x);
4

I can, however, take advantage of Postgres' ability to cast a boolean to an int:

psql> select true::int;
1
psql> select false::int;
0

Using ::int I get:

psql> select count(x.x::int) from (values (false), (true), (false), (true)) x (x);
4

Postgres is still counting everything that is not null, but what if we use sum instead?

psql> select sum(x.x::int) from (values (false), (true), (false), (true)) x (x);
2

Because everything is either a 0 or a 1, sum behaves like count.

Now you can do something like this:

psql> select sum((status = 'Awesome')::int) as awesomes, sum((status = 'Terrible')::int) as terribles from statuses;

PostgreSQL Triggers and Views 🐘

Today I learned a lot about PostgreSQL.

  1. Views can have triggers!
  2. Deleting such a view also deletes the trigger!

To demonstrate, let’s create a table, insert a record, and create a view that looks at our table.

create table some_table (name varchar);
insert into some_table values ('some name');
create view some_view as select name from some_table;

Now, we need a test function:

create or replace function always_null() returns trigger as $always_null$
  begin
    return null;
  end;
$always_null$ language plpgsql;

Let’s create a trigger that executes our function:

create trigger some_table_trigger
  instead of delete on some_view
  for each row execute procedure always_null();

Here’s our view… with a trigger attached!

backup=# \d some_view
                   View "public.some_view"
 Column |       Type        | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
 name   | character varying |           |          |
Triggers:
    some_table_trigger INSTEAD OF DELETE ON some_view FOR EACH ROW EXECUTE PROCEDURE always_null()

Our trigger can be inspected (lots of interesting info omitted):

backup=# select * from information_schema.triggers;
-[ RECORD 1 ]--------------+--------------------------------
trigger_name               | some_table_trigger

Drop the view, and lose the trigger:

backup=# drop view some_view;
DROP VIEW
backup=# select * from information_schema.triggers;
(0 rows)

PostgreSQL: Table or View?

I’m doing some database exploration today and needed to find out whether a collection of tables were all actually tables, or whether some of them were views. By querying the information schema with select table_name, table_type from information_schema.tables;, I learned each relation’s type:

> select table_name, table_type from information_schema.tables;
              table_name               | table_type
---------------------------------------+------------
 cats                                  | BASE TABLE
 dog                                   | BASE TABLE
 dogs_and_cats                         | VIEW
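
If you only care about one kind of relation, the same table_type column works in a where clause; for example, to list just the views:

select table_name
from information_schema.tables
where table_type = 'VIEW';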

Combine Records from Different Tables

Let’s say you have 2 users and 3 categories and you want a query that returns every combination of the two, resulting in 6 rows.

You can use cross join to do that:

select
  users.id as user_id,
  categories.id as category_id
from users cross join categories


 user_id | category_id
---------+-------------
       1 |           1
       1 |           2
       1 |           3
       2 |           1
       2 |           2
       2 |           3

Create a temp table from values

In postgres, if you are looking for a way to create a quick data set to experiment with, you can create a temporary table using the values expression.

create temp table test as values ('a', 1), ('b', 2), ('c', 3);

In postgres, a temp table is a table that will go away at the end of the session in which it was created.

The types of the columns are inferred as you can see when examining the table in psql:

> \d test
               Table "pg_temp_3.test"
 Column  |  Type   | Collation | Nullable | Default
---------+---------+-----------+----------+---------
 column1 | text    |           |          |
 column2 | integer |           |          |

What happens if we try to mix types?

> create temp table test as values ('a', 1), (1, 'c');
ERROR:  invalid input syntax for integer: "a"

You can also be specific about types by casting the values of the first row.

> create temp table test as values ('a'::varchar(1), 1::decimal), ('b', 1);
> \d test
                    Table "pg_temp_3.test"
 Column  |       Type        | Collation | Nullable | Default
---------+-------------------+-----------+----------+---------
 column1 | character varying |           |          |
 column2 | numeric           |           |          |

Choose only one row for a given value w/ Distinct

If you have a table with many rows but you only want one row for a given value, you can use distinct on (<column>):

select distinct on (letter) * 
from (
values 
('y', 512), 
('y',128), 
('z', 512), 
('x',128), 
('z', 256)) 
as x (letter, number);

Which produces

letter | number
--------+--------
 x      |    128
 y      |    512
 z      |    512
(3 rows)

Here the output comes back ordered by the specified column ascending, and putting order by letter at the end of the select statement produces the exact same output (and execution plan). Note, though, that without an explicit order by, Postgres doesn’t guarantee which row in each group gets kept.

Distinct chooses the first row for the given column after sorting, so changing the sort order for the second column will change the results.

Here are the results after adding an order by letter, number asc to the above select statement.
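
For reference, the full statement with the new order by looks like this:

select distinct on (letter) *
from (
values
('y', 512),
('y',128),
('z', 512),
('x',128),
('z', 256))
as x (letter, number)
order by letter, number asc;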

 letter | number
--------+--------
 x      |    128
 y      |    128
 z      |    256
(3 rows)

Values clause in a select statement

You’ve all seen the VALUES clause before. It is typically used to insert data.

insert into colors (name, brightness) 
values ('red', 10), ('black', 0), ('blue', 5);

You can also use the VALUES clause in a select statement:

select * from (values ('red', 10), ('black', 0), ('blue', 5));

This is great when experimenting with different parts of SQL at the command line.
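
If you want the columns to have names while you experiment, you can alias the derived table and its columns - the names colors, name, and brightness here are just for illustration:

select * from (values ('red', 10), ('black', 0), ('blue', 5)) as colors (name, brightness);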

Additionally, values is a first class expression on its own:

> values (1,2), (2,3);

 column1 | column2
---------+---------
       1 |       2
       2 |       3
(2 rows)

Generating a postgres query with no rows

For testing purposes, it is nice to be able to simulate a query that does not result in any rows.

select * from generate_series(0, -1) x;
 x
---
(0 rows)

This is because (from the docs):

When step is positive, zero rows are returned if start is greater than stop.

Step (the third argument to generate_series) defaults to 1, so any range where the series would have to go backwards from start to reach stop results in 0 rows.

select * from generate_series(4, 3) x;
 x
---
(0 rows)

And now you can see how sum() behaves without any rows:

select sum(x) from generate_series(1, 0) x;
 sum
-----
   ø
(1 row)

Ohhh, it returns null. I hope you’re planning for that.
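
One way to plan for it, if you always want a number back, is to wrap the aggregate in coalesce, which turns that null into a 0:

select coalesce(sum(x), 0) from generate_series(1, 0) x;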

Group by and order by can use aliases

In PostgreSQL the alias of a selected expression can be used in the group by and order by clauses instead of repeating the expression.

For example, this:

select left(lower(email), 1) as first_letter, count(*) as num
from users
group by left(lower(email), 1)
order by count(*) desc;

Can be replaced with this:

select left(lower(email), 1) as first_letter, count(*) as num
from users
group by first_letter
order by num desc;

PostgreSQL query with an array of regexes

I recently wanted to query my Postgres database by matching a column against an array of patterns:

To query where the column matches all of the patterns in the array:

select * 
from my_table 
where my_column ilike all (array['%some%', '%words%'])

To query where the column matches at least one, but not necessarily all, of the patterns in the array:

select * 
from my_table 
where my_column ilike any (array['%some%', '%words%'])
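
Strictly speaking, ilike takes LIKE-style patterns rather than regular expressions. If you want actual regexes, the case-insensitive regex match operator ~* works with any and all in the same way (same hypothetical table and column as above):

select *
from my_table
where my_column ~* any (array['some', 'words'])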

First Postgres Queries: a Join and an Alias

To see some 2018 TIL stats, I learned to write some Postgres queries. One was:

select developers.username, count(*)
from posts
join developers on posts.developer_id = developers.id
where published_at >= '2018-01-01' and published_at < '2019-01-01'
group by developers.username
order by count(*) desc;

This gave me the number of posts in 2018 associated with each username. I joined the posts table and the developers table with join developers on posts.developer_id = developers.id. That lines up the developer_id column in the posts table with the id column in the developers table. Now I can select the username column from the developers table: developers.username.

Next, I wanted to see the average number of likes per post for each developer. I do that by having the query evaluate the expression sum(likes)/count(*). I followed that with the phrase as average. This creates an alias that can save some thinking and typing later in the query. The final query:

select developers.username, sum(likes)/count(*) as average
from posts
join developers on posts.developer_id = developers.id
where published_at >= '2018-01-01' and published_at < '2019-01-01'
group by developers.username
order by average desc;

Congrats Taylor, Brian, and Jack for having the most loved posts of 2018!

Change the delimiter in mysql for procedures

Creating procedures in mysql has a weird quirk. When you terminate a statement within the procedure with a ;, the entire create procedure statement gets executed up to the semicolon. This causes a syntax error.

-> create procedure thing()
-> begin
-> set @x = 0;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3

To get past this, you need to declare a different delimiter so that mysql knows when you are ready to execute the entire create procedure statement. To change the delimiter to //, run:

delimiter //

Now you can execute the create procedure statement by putting // after end:

create procedure thing()
begin
  set @x = 0;
end //
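
Afterwards, switch the delimiter back to the default and, if you like, call the new procedure; a quick sketch:

delimiter ;
call thing();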

Convert A String To A Timestamp In PostgreSQL

If you have a string that represents a point in time, there are a couple ways that you can convert it to a PostgreSQL timestamptz value.

If the string is in ISO 8601 format, then it can be simply cast to timestamptz.

> select '2018-10-24'::timestamptz;
      timestamptz
------------------------
 2018-10-24 00:00:00-05

A more general purpose approach is to use the to_timestamp function.

> select to_timestamp('2018-10-24', 'YYYY-MM-DD');
      to_timestamp
------------------------
 2018-10-24 00:00:00-05

The first argument is our string-to-be-converted, in whatever format. The second argument is another string describing the format of the first.

Note: Both of these approaches produce a timestamptz value.

Doing Date Math In MySQL

MySQL has an array of functions for interacting with date and datetime values. If you’d like to do math with a date to compute a date in the future or the past, you can use the DATE_ADD() and DATE_SUB() functions.

mysql> select now() Now, date_add(now(), interval 10 minute) '10 Minutes Later';
+---------------------+---------------------+
| Now                 | 10 Minutes Later    |
+---------------------+---------------------+
| 2018-10-18 15:53:29 | 2018-10-18 16:03:29 |
+---------------------+---------------------+

mysql> select now() Now, date_sub(now(), interval 9 day) '9 Days Earlier';
+---------------------+---------------------+
| Now                 | 9 Days Earlier      |
+---------------------+---------------------+
| 2018-10-18 15:54:01 | 2018-10-09 15:54:01 |
+---------------------+---------------------+

There are equivalent ADDDATE() and SUBDATE() functions if you prefer.

Check out the docs for more details.

Group dates by week in Postgres

Timestamps are everywhere in our postgres tables. To group them together you can use date_trunc to normalize the timestamp by minute, hour, day, etc.

You can also normalize the date by week.

> select date_trunc('week', now());
2018-10-08 00:00:00+00

The above example returns the first day of the week for the current moment which is a Monday. Postgres truncates dates to Monday rather than Sunday.

To group dates by week, just use the truncated value in both the select clause and the group by clause.

select count(*), 
  date_trunc('week', request_time) 
from requests 
group by 
  date_trunc('week', request_time) 
order by date_trunc desc;

Reference rejected values on conflict in postgres

I have a table of fruits and I have their quantity:

create table fruits (name text primary key, quantity int);

In that table I have a row for apples:

insert into fruits (name, quantity) values ('apple', 10);

So generally, when I try to insert another apples row, because name is the primary key, I’ll get a duplicate key error:

insert into fruits (name, quantity) values ('apple', 11);
duplicate key value violates unique constraint "fruits_pkey"

So if I don’t know whether to insert or update, I can use the on conflict functionality in postgres to set the quantity even if the apple row is already there.

insert into fruits (name, quantity) values ('apple', 13) on conflict (name) do update set quantity = excluded.quantity;

The excluded special table references the row that postgres rejected, so I can pull the new quantity from it when updating the existing row.
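
If instead I wanted to add the incoming quantity to the existing one, the update can reference both the current row (through the table name, fruits) and the rejected row (through excluded); a sketch:

insert into fruits (name, quantity) values ('apple', 13) on conflict (name) do update set quantity = fruits.quantity + excluded.quantity;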

MySQL Average

MySQL: it’s like a calculator inside your computer. Today I learned how to compute the average of something using this popular RDBMS.

Let’s compute the average length of name in table dog.

mysql> select avg(char_length(name)) from dog;

Result:

+------------------------+
| avg(char_length(name)) |
+------------------------+
|                 9.2965 |
+------------------------+

Psql Watch

The Postgres REPL psql includes a \watch command that repeatedly executes the current query buffer every n seconds. Here’s the official description:

\watch [ seconds ] Repeatedly execute the current query buffer (like \g) until interrupted or the query fails. Wait the specified number of seconds (default 2) between executions.

It executes the current query buffer, which you can print with \p:

tilex_dev=# \p
select title from posts order by inserted_at desc limit 1;

Run \watch and tail your query result as it changes:

tilex_dev=# \watch
Sun Jul 29 17:06:13 2018 (every 2s)

        title
----------------------
 Psql Watch
(1 row)

Time: 2.175 ms


Show materialized view definition in #postgresql

psql offers a handy \dv command for showing the definition of a view. That command does not seem to work on materialized views.

To see the definition of a materialized view use the following instead:

select pg_get_viewdef('search_documents');

Another option is to use \d+ search_documents which shows both the query and the columns.

(Replace search_documents with the name of your view.)

Case insensitive `in` query in #postgres #psql

If you have a list of strings and you want to query a column to get all the matching records, but you do not care about the casing, Postgres offers a cool and easy way of doing that with the citext extension.

Given this table:

 id | company_name
----+--------------
  1 | Abibas
  2 | Nykey
  3 | Pumar

We want to match the following:

select company_name
from vendors 
where company_name in ('Abibas', 'NyKey', 'PUMAr');

First you will have to make sure you have the citext extension created if you haven’t already:

create extension citext;

Then you can cast the searched field to citext:

select company_name
from vendors 
where company_name::citext in ('Abibas', 'NyKey', 'PUMAr');

h/t joshbranchaud for helping me find this

Postgres automatically appends timezone

Dates can be notoriously hard, particularly when it comes to timezones.

If you select a date without timezone information and cast it to a data type w/timezone, Postgres will assume the timezone of the server:

select '2000-01-01'::timestamptz;
┌────────────────────────┐
│      timestamptz       │
├────────────────────────┤
│ 2000-01-01 00:00:00-06 │
└────────────────────────┘

I’m located in CST, hence the -06 at the end of the time specification (6 hours behind UTC).

To use a specific timezone such as UTC, instead of your server’s timezone which can be pretty arbitrary:

select '2000-01-01'::timestamptz at time zone 'UTC';
┌─────────────────────┐
│      timezone       │
├─────────────────────┤
│ 2000-01-01 00:00:00 │
└─────────────────────┘

Postgres now displays the time in UTC. Still probably not what you expected (6am?) - Postgres infers the 00:00:00 time in your timezone and converts it to UTC (in my case adding 6 hours).

So how would you get it to show 00:00:00 and still be in UTC timezone?

select '2000-01-01 00:00:00 UTC'::timestamptz at time zone 'UTC';
┌─────────────────────┐
│      timezone       │
├─────────────────────┤
│ 2000-01-01 00:00:00 │
└─────────────────────┘

If you use timezone-aware data types, it is recommended to always specify the timezone when inserting data; otherwise you are in for a world of trouble.

Quick syntax reference for #SQL directly in #psql

Whenever I forget the syntax for a certain SQL command in Postgres, I usually reach for Dash or simply search DuckDuckGo for the specific command. That usually yields the official Postgres documentation website, which is great…

Wouldn’t it be nice though if I could stay right inside psql and get the documentation I am looking for?

It would… and it’s possible:

\h create index

[screenshot: output of \h create index]

Use \h followed by the SQL command. This is not the full verbose documentation that you would find on the Postgres docs website - it’s more of a syntax reference, which is most of the time exactly what you need.

If you are not sure what to type or simply want to explore new commands, try typing \h without anything after it - you will see something like this:

[screenshot: output of \h by itself]

Postgres locale settings are inherited from OS

By default, Postgres inherits locale settings from the operating system, which can greatly affect sort order. Comparing Linux and Unix systems, both using the locale en_US.UTF-8, we see the following sort outputs:

Unix

select name from unnest(array['No one', 'None', ' Not']) name order by name;
name
--------
 Not
No one
None

Linux

select name from unnest(array['No one', 'None', ' Not']) name order by name;
name
--------
None
No one
 Not

You’ll notice that on the Linux system, whitespace is ignored during sorting.

To get consistent behavior across operating systems, you can use the postgres provided C locale:

select name from unnest(array['No one', 'None', ' Not']) name order by name collate 'C';
name
--------
 Not
No one
None
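
To see which collation each of your databases actually inherited, you can query pg_database:

select datname, datcollate from pg_database;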

Page and Search Through MySQL

Need to explore results in the MySQL REPL? The pager command can help.

Let’s set our pager of choice (we’ll use Less):

mysql> pager less
PAGER set to 'less'

Then, execute the query:

mysql> select * from users;

The result will be loaded in the Less pager. From here, we can navigate or search through the results.

Leave the custom pager with \n (nopager):

mysql> \n
PAGER set to stdout

List all available extensions in #Postgres

Postgres comes packed with extensions just waiting to be enabled!

To see a list of those extensions:

select * from pg_available_extensions;

This will list each extension’s name, default_version, installed_version, and comment, which is a one-line description of what the extension does.

Here’s an interesting one for example:

name              | earthdistance
default_version   | 1.1
installed_version | ø
comment           | calculate great-circle distances on the surface of the Earth

To enable an extension, simply call create extension on the name:

create extension if not exists earthdistance;

Show Create Statement For A Table In MySQL

In MySQL, you can get a quick rundown of a table using describe users. An alternative to this approach is to have MySQL show the create statement for a table.

> show create table users\G
*************************** 1. row ***************************
       Table: users
Create Table: CREATE TABLE `users` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(80) NOT NULL,
  `last_name` varchar(80) NOT NULL,
  `email` varchar(80) NOT NULL,
  `middle_initial` varchar(80) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `unique_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

This includes some additional information like primary key and index information. It is also a great way to study the SQL that it takes to create all the facets of a table.

See the show create table docs for more details.

h/t Jake Worth

Escaping String Literals With Dollar Quoting

String literals in PostgreSQL are defined by surrounding the content with the ' character. For string literals that contain the ' character, you may have seen it escaped with a preceding '.

> select 'Isn''t this nice?';
     ?column?
------------------
 Isn't this nice?

This is easy enough to do, but can be error prone and doesn’t work well if SQL is being programmatically generated. A great workaround is to escape string literals using what is called dollar quoting.

> select $$Isn't this even nicer?$$;
        ?column?
------------------------
 Isn't this even nicer?

Just wrap both ends in $$ instead of '.
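
If the content itself might contain $$, you can also put a tag between the dollar signs (the same way the $always_null$ function body earlier on this page does); the tag name is arbitrary:

select $str$This string has $$ in it$str$;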


Capitalize All The Words In PostgreSQL

PostgreSQL provides the string function initcap() as a way of capitalizing all words. In the process, it cleans up the casing of the remaining parts of the words.

Here are some examples of how it works.

> select initcap('hello, world');
   initcap
--------------
 Hello, World

> select initcap('HELLO, WORLD');
   initcap
--------------
 Hello, World

See the String Functions and Operators docs for more details.

Difference between `union` and `union all` in psql

union is an operator that operates on two sets of results, combining them:

chris=# select 1 union select 2;
 ?column?
----------
        1
        2
(2 rows)

It filters out duplicate results, though:

chris=# select 1 union select 1;
 ?column?
----------
        1
(1 row)

If you don’t want to worry about duplicate results and just want to stack the result sets on top of each other, use union all:

chris=# select 1 union all select 1;
 ?column?
----------
        1
        1
(2 rows)

Postgres Natural joins squash unneeded columns

Dealing mostly with Rails-style databases, we generally write inner joins on ids and foreign keys, but there are other ways to create joins, the most fun being the natural join.

The query below works like a natural join but uses inner join syntax: when two tables have columns with the same name, join on that column. You can see that both columns are returned even though they have the same value.

chris=# select * from (select 1 x) foo inner join (select 1 x) bar on bar.x = foo.x ;
 x | x
---+---
 1 | 1
(1 row)

When using the natural join syntax, the query is less verbose and only returns one column. Postgres knows that the two columns have the same value (that’s the whole point!), so it only presents one.

chris=# select * from (select 1 x) foo natural join (select 1 x) bar;
 x
---
 1
(1 row)

Postgres `coalesce` errors with multiple types

The Postgres function coalesce takes a variable number of arguments and returns the first non-null argument.

psql> select coalesce(null, null, null, null, 'hi!');
hi!

But if you include a number and a string, then Postgres throws an error.

psql> select coalesce(1, 'hi!');
ERROR:  invalid input syntax for integer: "hi!"

Coalesce will try to coerce each value to the type of the first argument if possible.

psql> select coalesce(1, '2');
1

MySQL behaves a bit differently; it allows values of different types.

mysql> select coalesce(1, 'hi!');
1

The postgres way makes a bit more sense to me. What is the type of a column if there are multiple possible types? Is it always a string? But I wanted a number? I feel like the additional type safety will help catch errors a bit earlier.

Operators to get a string or more json in Postgres

There are two operators in Postgres that I have trouble telling apart: the -> operator and the ->> operator. They both take a json object on the left and a key on the right, but they return different types.

Given json in postgres that looks like this:

> select '{"a": {"b": 2}}'::jsonb;
{"a": {"b": 2 }}

I can traverse the data structure with the -> operator because it returns a data structure.

> select '{"a": {"b": 2}}'::jsonb  -> 'a' -> 'b';
2

I can get a value as a string from the json with the ->> operator.

> select '{"a": {"b": 2}}'::jsonb ->> 'a';
{"b": 2 } -- ITS A STRING

But if you use the ->> operator to get a string, then you can’t traverse the data structure with the -> operator anymore.

> select '{"a": {"b": 2}}'::jsonb ->> 'a' -> 'b';
ERROR:  operator does not exist: text -> unknown
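
So the order matters: traverse with -> and save ->> for the final step when you want text. Reworking the failing query above:

> select '{"a": {"b": 2}}'::jsonb -> 'a' ->> 'b';
2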

Show Tables That Match A Pattern In MySQL

An unfamiliar MySQL database with tons of tables can be a difficult thing to navigate. You may have an idea of the kind of table you are looking for based on a domain concept you’ve seen elsewhere.

You can pare down the results returned by show tables by including a like clause with a pattern. For example, this statement will show me only tables that have the word user in them:

> show tables like '%user%';
+-------------------------------+
| Tables_in_jbranchaud (%user%) |
+-------------------------------+
| admin_users                   |
| users                         |
+-------------------------------+

Dump A MySQL Database To A File

The mysqldump client is a handy tool for creating a backup or snapshot of a MySQL database. The standard use of this command produces an alphabetical series of statements that comprise the structure and data of the specified database. It directs all of this to stdout. You’ll likely want to redirect it to a file.

$ mysqldump my_database > my_database_backup.sql

The output will include special comments with MySQL directives that disable things like constraint checking. This is what allows the output to be in alphabetical order without necessarily violating any foreign key constraints.

If you need to dump multiple databases, include the --databases flag with a space-separated list of database names. Or dump all of them with --all-databases.
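
For example, with made-up database names:

$ mysqldump --databases app_dev app_test > multiple_databases_backup.sql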

See man mysqldump for more details.

Display Output In A Vertical Format In MySQL

Output for tables with lots of columns can be hard to read and sometimes overflow the terminal window. Consider the output from Show Indexes For A Table:

> show indexes in users;
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name     | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| users |          0 | PRIMARY      |            1 | id          | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| users |          0 | unique_email |            1 | email       | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

We can vertically orient the output of a statement by terminating it with \G instead of ; (or \g).

> show indexes in users\G
*************************** 1. row ***************************
        Table: users
   Non_unique: 0
     Key_name: PRIMARY
 Seq_in_index: 1
  Column_name: id
    Collation: A
  Cardinality: 0
     Sub_part: NULL
       Packed: NULL
         Null:
   Index_type: BTREE
      Comment:
Index_comment:
*************************** 2. row ***************************
        Table: users
   Non_unique: 0
     Key_name: unique_email
 Seq_in_index: 1
  Column_name: email
    Collation: A
  Cardinality: 0
     Sub_part: NULL
       Packed: NULL
         Null:
   Index_type: BTREE
      Comment:
Index_comment:

Show Indexes For A Table In MySQL

When describing a table, such as users:

> describe users;
+------------+-----------------------+------+-----+---------+----------------+
| Field      | Type                  | Null | Key | Default | Extra          |
+------------+-----------------------+------+-----+---------+----------------+
| id         | mediumint(8) unsigned | NO   | PRI | NULL    | auto_increment |
| first_name | varchar(80)           | NO   |     | NULL    |                |
| last_name  | varchar(80)           | NO   |     | NULL    |                |
| email      | varchar(80)           | NO   | UNI | NULL    |                |
+------------+-----------------------+------+-----+---------+----------------+

We can see in the Key column that there’s a primary key and a unique key for this table on id and email, respectively.

These keys are indexes. To get more details about each of the indexes on this table, we can use the show indexes command.

> show indexes in users;
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| Table | Non_unique | Key_name     | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| users |          0 | PRIMARY      |            1 | id          | A         |           0 |     NULL | NULL   |      | BTREE      |
| users |          0 | unique_email |            1 | email       | A         |           0 |     NULL | NULL   |      | BTREE      |
+-------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+

List Databases And Tables In MySQL

If you’ve started a mysql session, but haven’t connected to a particular database yet, you can list the available databases like so:

> show databases;
+-----------------------------+
| Database                    |
+-----------------------------+
| information_schema          |
| my_app_dev                  |
+-----------------------------+

If you are curious about the tables in a particular database, you can list them by specifying the database’s name:

> show tables in my_app_dev;
+------------------------------+
| Tables_in_my_app_dev         |
+------------------------------+
| pokemons                     |
| trainers                     |
+------------------------------+

Alternatively, you can connect to the database of interest and then there is no need to specify the name of the database going forward.

> use my_app_dev;
> show tables;
+------------------------------+
| Tables_in_my_app_dev         |
+------------------------------+
| pokemons                     |
| trainers                     |
+------------------------------+

`any` and `all` special Postgres constructs

To determine if a value exists in a subquery or a scalar value list, you can use the in operator like so:

> select 1 in (1, 2)
true

any and all go a step further by allowing an operator in the special construct, which is applied against each value in the subquery or array (there is no scalar value list form for these constructs).

any examples:

> select 1 = any(Array[1, 2, 3])
true
> select 1 > any(Array[1, 2, 3])
false

all examples:

> select 17 <> all(Array[2, 4, 8])
true
> select 17 < all(Array[2, 4, 8])
false

some is a synonym for any.
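
These special constructs also accept a subquery instead of an array. A minimal sketch using generate_series:

> select 3 >= all (select x from generate_series(1, 3) x)
true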