Absinthe manages to do a lot of interesting things during its compilation process, and today we're going to look at how that works. Specifically, we'll look closely at how it uses some metaprogramming tricks and module attributes to provide compile-time schema validation for us.
It's pretty amazing (to me, at least) that when we use Absinthe, we get a really simple, easy-to-use API for defining our schema and we still get a good amount of compile-time type checking out of it! For example, if we try to use a type that hasn't yet been defined, we'll see an error like this in our terminal when we compile our application:
== Compilation error in file lib/blog_web/schema.ex ==
** (Absinthe.Schema.Error) Invalid schema:
/home/devon/sandbox/absinthe_tutorial/lib/blog_web/schema/account_types.ex:10: User_state :custom_enum is not defined in your schema.

  Types must exist if referenced.


    (absinthe 1.4.16) lib/absinthe/schema.ex:271: Absinthe.Schema.__after_compile__/2
    (stdlib 3.13.2) lists.erl:1267: :lists.foldl/3
    (stdlib 3.13.2) erl_eval.erl:680: :erl_eval.do_apply/6
    (elixir 1.11.2) lib/kernel/parallel_compiler.ex:314: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/7
The fact that this happens is cool on its own, but how Absinthe manages to do it is what I think is really cool. It takes some tricky (but interesting) usage of modules and module attributes to make it work, and that's what we'll be covering today. But before we can get to the actual type checking, we need to take a quick look at how one defines a schema with Absinthe, and then at how that schema is compiled to create those modules and module attributes using Elixir's compilation callbacks.
Defining a Schema with Absinthe
To define our GraphQL schema using Absinthe, we need to write a single module in which that schema is declared, and in that module we need to use Absinthe.Schema. If your schema is small enough, doing all of that in one file is easy:
defmodule BlogWeb.Schema do
  use Absinthe.Schema

  alias BlogWeb.Resolvers

  object :user do
    field :id, :id
    field :name, :string
    field :posts, list_of(:post) do
      resolve &Resolvers.Content.list_posts/3
    end
  end

  object :post do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user, non_null(:user)
  end

  input_object :post_params do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user_id, non_null(:id)
  end

  query do
    field :posts, list_of(:post) do
      resolve(&Resolvers.Content.list_posts/3)
    end
  end

  mutation do
    field :create_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.create_post/3)
    end

    field :update_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.update_post/3)
    end

    field :delete_post, :post do
      arg(:id, non_null(:id))
      resolve(&Resolvers.Content.delete_post/3)
    end
  end
end
However, once you start building out your application and things get bigger, you generally end up breaking the schema up into multiple "schema fragment" files and importing the types defined in those fragments into your schema using the Absinthe.Schema.Notation.import_types/2 and Absinthe.Schema.Notation.import_fields/2 macros.

To do that with our schema above, we might end up with something like the following, with each set of types defined in its own module, each of which calls use Absinthe.Schema.Notation. We can imagine that each module is defined in its own file, although they technically don't need to be:
defmodule BlogWeb.Schema.UserTypes do
  use Absinthe.Schema.Notation

  alias BlogWeb.Resolvers

  object :user do
    field :id, :id
    field :name, :string
    field :posts, list_of(:post) do
      resolve &Resolvers.Content.list_posts/3
    end
  end
end

defmodule BlogWeb.Schema.PostTypes do
  use Absinthe.Schema.Notation

  alias BlogWeb.Resolvers

  object :post do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user, non_null(:user)
  end

  input_object :post_params do
    field :id, non_null(:id)
    field :title, non_null(:string)
    field :body, non_null(:string)
    field :user_id, non_null(:id)
  end

  object :post_queries do
    field :posts, list_of(:post) do
      resolve(&Resolvers.Content.list_posts/3)
    end
  end

  object :post_mutations do
    field :create_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.create_post/3)
    end

    field :update_post, :post do
      arg(:params, non_null(:post_params))
      resolve(&Resolvers.Content.update_post/3)
    end

    field :delete_post, :post do
      arg(:id, non_null(:id))
      resolve(&Resolvers.Content.delete_post/3)
    end
  end
end

defmodule BlogWeb.Schema do
  use Absinthe.Schema

  import_types(Absinthe.Type.Custom)
  import_types(BlogWeb.Schema.UserTypes)
  import_types(BlogWeb.Schema.PostTypes)

  alias BlogWeb.Resolvers

  query do
    import_fields(:post_queries)
  end

  mutation do
    import_fields(:post_mutations)
  end
end
But how does Absinthe know that when we reference the :post type in the definition of our :user type, :post is a valid type to use? Well, that's where the fun stuff comes in!
How Elixir's Compilation Callbacks Work
Well, to know how Absinthe works its magic, first we need to know a bit about Elixir's compilation callbacks. A compilation callback is, as it sounds, a function that is executed either before, during, or after compilation takes place. There are three compilation callbacks, but the two we care about today are @before_compile and @after_compile.
These are two functions that are called, as you would assume, before and after compilation of a module. The before_compile callback receives as an argument the compilation __ENV__, which is a struct containing information about the compilation process. More info on what exactly is in there can be found in the docs for Macro.Env.

Likewise, the after_compile callback receives that same compilation __ENV__, and also the compiled bytecode for the module.
These two callbacks give us the opportunity to set up things that might be needed for compilation in our before_compile callback, and then to check things that have just been compiled in our after_compile callback. That's exactly how Absinthe uses these two features for its schema compilation and schema validation.
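To make this concrete, here's a minimal sketch of the two hooks in action (these module names are made up for illustration; they're not from Absinthe):

defmodule Hooks do
  # Must be a macro: it runs just before the target module finishes
  # compiling, and any quoted code it returns is injected into that module.
  defmacro __before_compile__(env) do
    IO.puts("about to compile #{inspect(env.module)}")

    quote do
      def injected?, do: true
    end
  end

  # A plain function: it runs right after compilation, receiving the
  # compilation __ENV__ and the module's compiled bytecode.
  def __after_compile__(env, bytecode) do
    IO.puts("compiled #{inspect(env.module)} (#{byte_size(bytecode)} bytes)")
  end
end

defmodule Example do
  @before_compile Hooks
  @after_compile Hooks

  def hello, do: :world
end

Compiling this prints both messages, and Example ends up with an injected?/0 function that it never wrote by hand.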
How Absinthe Does Schema Validation at Compile Time
So, what exactly is Absinthe doing when it compiles? Well, let's start with the compilation of those schema fragments. Absinthe.Schema.Notation contains a definition of a __before_compile__/1 function, which is used as the handler for the @before_compile callback for each of those schema fragments.
defmacro __before_compile__(env) do
  module_attribute_descs =
    env.module
    |> Module.get_attribute(:absinthe_desc)
    |> Map.new()

  attrs =
    env.module
    |> Module.get_attribute(:absinthe_blueprint)
    |> List.insert_at(0, :close)
    |> reverse_with_descs(module_attribute_descs)

  imports =
    (Module.get_attribute(env.module, :__absinthe_type_imports__) || [])
    |> Enum.uniq()
    |> Enum.map(fn
      module when is_atom(module) -> {module, []}
      other -> other
    end)

  schema_def = %Schema.SchemaDefinition{
    imports: imports,
    module: env.module,
    __reference__: %{
      location: %{file: env.file, line: 0}
    }
  }

  blueprint =
    attrs
    |> List.insert_at(1, schema_def)
    |> Absinthe.Blueprint.Schema.build()

  [schema] = blueprint.schema_definitions

  {schema, functions} = lift_functions(schema, env.module)

  sdl_definitions =
    (Module.get_attribute(env.module, :__absinthe_sdl_definitions__) || [])
    |> List.flatten()
    |> Enum.map(fn definition ->
      Absinthe.Blueprint.prewalk(definition, fn
        %{module: _} = node ->
          %{node | module: env.module}

        node ->
          node
      end)
    end)

  {sdl_directive_definitions, sdl_type_definitions} =
    Enum.split_with(sdl_definitions, fn
      %Absinthe.Blueprint.Schema.DirectiveDefinition{} ->
        true

      _ ->
        false
    end)

  schema =
    schema
    |> Map.update!(:type_definitions, &(sdl_type_definitions ++ &1))
    |> Map.update!(:directive_definitions, &(sdl_directive_definitions ++ &1))

  blueprint = %{blueprint | schema_definitions: [schema]}

  quote do
    unquote(__MODULE__).noop(@desc)

    def __absinthe_blueprint__ do
      unquote(Macro.escape(blueprint, unquote: true))
    end

    unquote_splicing(functions)
  end
end
At first, the code in that function might be tricky to understand, but the most important part of understanding what's going on there is the definition of the __absinthe_blueprint__/0 function. We can see that we're defining a function that returns a map (an Absinthe.Blueprint struct, in fact) containing a lot of information about the state of things before the current schema fragment was compiled. This __absinthe_blueprint__/0 function will be really important in the final compilation step that we'll look at in a bit.
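Just to see it in action, once a fragment like BlogWeb.Schema.UserTypes has compiled, we could call that generated function ourselves in iex (the output below is abbreviated; the real blueprint struct is huge):

iex> BlogWeb.Schema.UserTypes.__absinthe_blueprint__()
%Absinthe.Blueprint{
  schema_definitions: [%Absinthe.Blueprint.Schema.SchemaDefinition{...}],
  ...
}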
One other really interesting thing about this code that's important to notice is how many calls to Module.get_attribute/2 there are! This is one of the things Absinthe leans on heavily for this compilation process: the use of module attributes as, essentially, global variables that can be accessed by other modules during their compilation. There are a lot of calls to Module.get_attribute/2 and Module.put_attribute/3 in this module, and recognizing this pattern helps us put the rest of the process into context.
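Stripped down to a toy version of my own (none of these module or attribute names come from Absinthe), the pattern looks like this: accumulate values in a module attribute while the module body runs, then read them all back in a before_compile hook and bake them into a function:

defmodule TinyNotation do
  defmacro __using__(_) do
    quote do
      # An accumulating attribute collects every value written to it
      # instead of overwriting the previous one.
      Module.register_attribute(__MODULE__, :entries, accumulate: true)
      @before_compile TinyNotation
      import TinyNotation, only: [entry: 1]
    end
  end

  defmacro entry(name) do
    quote do
      Module.put_attribute(__MODULE__, :entries, unquote(name))
    end
  end

  defmacro __before_compile__(env) do
    # Read back everything accumulated during compilation and expose it
    # as a plain function that other modules can call later.
    entries = Module.get_attribute(env.module, :entries)

    quote do
      def entries, do: unquote(entries)
    end
  end
end

defmodule MyTypes do
  use TinyNotation

  entry :user
  entry :post
end

MyTypes.entries() then returns [:post, :user] (accumulated attributes come back most-recent-first), and that's essentially how a fragment's types end up queryable by other modules after compilation.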
The other thing happening here is that we're defining a lot of functions in a dynamically named module! These functions contain yet more information, and we can see a bit more of how this is used in the __before_compile__/1 function defined in Absinthe.Schema:
defmacro __before_compile__(_) do
  quote do
    @doc false
    def __absinthe_pipeline_modifiers__ do
      [@schema_provider] ++ @pipeline_modifier
    end

    def __absinthe_schema_provider__ do
      @schema_provider
    end

    def __absinthe_type__(name) do
      @schema_provider.__absinthe_type__(__MODULE__, name)
    end

    def __absinthe_directive__(name) do
      @schema_provider.__absinthe_directive__(__MODULE__, name)
    end

    def __absinthe_types__() do
      @schema_provider.__absinthe_types__(__MODULE__)
    end

    def __absinthe_types__(group) do
      @schema_provider.__absinthe_types__(__MODULE__, group)
    end

    def __absinthe_directives__() do
      @schema_provider.__absinthe_directives__(__MODULE__)
    end

    def __absinthe_interface_implementors__() do
      @schema_provider.__absinthe_interface_implementors__(__MODULE__)
    end

    def __absinthe_prototype_schema__() do
      @prototype_schema
    end
  end
end
When each schema fragment is compiled, Absinthe also defines a module that contains information about the module that was just compiled - so, for example, for our BlogWeb.Schema.UserTypes module from above, it will define a BlogWeb.Schema.UserTypes.Compiled module. This convention lets Absinthe know where to look for information on each module that was compiled with some schema information.
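Because the __before_compile__/1 hook above defines those __absinthe_* lookup functions on the schema module itself, we can call them directly in iex once everything compiles. Looking up our :post type, for instance, should hand back Absinthe's full internal type struct (shape abbreviated here):

iex> BlogWeb.Schema.__absinthe_type__(:post)
%Absinthe.Type.Object{identifier: :post, name: "Post", ...}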
And now that all that work has been done during the compilation process, we can look at the __after_compile__/2 callback defined in Absinthe.Schema:
def __after_compile__(env, _) do
  prototype_schema =
    env.module
    |> Module.get_attribute(:prototype_schema)

  pipeline =
    env.module
    |> Absinthe.Pipeline.for_schema(prototype_schema: prototype_schema)
    |> apply_modifiers(env.module)

  env.module.__absinthe_blueprint__
  |> Absinthe.Pipeline.run(pipeline)
  |> case do
    {:ok, _, _} ->
      []

    {:error, errors, _} ->
      raise Absinthe.Schema.Error, phase_errors: List.wrap(errors)
  end
end
This is where all that information and all that metaprogramming is actually turned into a helpful user feature! In short, this callback uses all of the information that's been stored in the various module attributes and exposed by all of those functions defined in all of those .Compiled modules to build up something that Absinthe calls a blueprint. A blueprint is, again, what it sounds like: it contains the information for how documents will later be evaluated against the current GraphQL schema during resolution. Absinthe then evaluates this blueprint, and if any errors are returned from that evaluation, they're raised at the end of the compilation process!
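That validate-then-raise trick is one we can borrow in our own code. Here's a stripped-down sketch of my own (EnforceRun and Job are made-up names, not part of Absinthe) showing how an __after_compile__/2 hook can inspect a freshly compiled module and fail the build:

defmodule EnforceRun do
  # Runs right after the target module compiles. By this point the module
  # is already available, so we can inspect the functions it exports.
  def __after_compile__(env, _bytecode) do
    unless function_exported?(env.module, :run, 0) do
      raise CompileError,
        file: env.file,
        line: env.line,
        description: "#{inspect(env.module)} must define run/0"
    end
  end
end

defmodule Job do
  @after_compile EnforceRun

  def run, do: :ok
end

Delete run/0 from Job and the project simply stops compiling - a broken contract becomes a compile-time error instead of a runtime surprise, which is exactly what Absinthe does for undefined types.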
Clearly this is kind of a complicated process, but it's also a cool way to use some of the basic features of the Elixir compiler to deliver value to users. Exploring this process helped me learn a lot about how Elixir compiles applications, but it also made it clear to me that the Absinthe team has put a great deal of time and effort into making this user experience really great, and for that I'm very thankful!
P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!