Compare commits

...

77 Commits

Author SHA1 Message Date
0e6309d2df . 2021-03-28 12:38:02 -04:00
7401fed6d3 updating some pacman stuff. need to finish objtypes.Repo and might need to tweak config writer. 2020-01-10 08:39:28 -05:00
ec28849f23 pacman-key initialization done. 2020-01-03 03:38:35 -05:00
dc70409c8d whew. 2019-12-30 20:22:10 -05:00
20b35044e4 soooo this is all cool and all, but i'm scrapping reading the config. it's dumb and bloaty. the tarballs all ship with the same config. 2019-12-30 19:49:10 -05:00
a4080121cd this... actually isn't even necessary, come to think. 2019-12-30 13:00:57 -05:00
c22b473b49 checking in before i change a regex pattern. this currently will grab commented out defaults, but we don't want that since it complicates things - so we hardcode in shipped defaults. 2019-12-30 12:59:52 -05:00
7f3b8b98aa i think that's it for logging on system libs 2019-12-30 05:34:34 -05:00
65b316c014 logging to network providers 2019-12-28 02:20:50 -05:00
25e86e75ff mdadm logging done, some value errors converted to type errors 2019-12-24 01:56:29 -05:00
a0442df77d check-in 2019-12-23 15:43:23 -05:00
c165e60d34 lvm logging done 2019-12-23 14:52:00 -05:00
48ab7f953f luks logging done 2019-12-22 11:59:49 -05:00
a65ef8232a pushing some updates; luks logging not done 2019-12-20 12:50:41 -05:00
4418348e78 logging added to config parser. 2019-12-19 17:00:16 -05:00
b4c9caefbd fixed the gpg thing. WHEW. what a PITA.
also fleshed out some logging.
2019-12-19 14:04:34 -05:00
f25e6bee2a starting to roll in some logging. still need to figure out what's going on with that gpg verifyData 2019-12-17 03:40:08 -05:00
1ae519bb40 hashing is done 2019-12-11 06:33:24 -05:00
d7d85c7d9d future proofing is good, but...
since print() was made a function in py3, i can predict at some point
that return will be made a func as well. sure, good.
but "return()" *currently* returns an empty tuple. We want to
explicitly return None for testing purposes.
2019-12-11 04:33:15 -05:00
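The subtlety this commit message describes is easy to demonstrate; a minimal illustration (not from this repo):

def empty_parens():
    return()  # parsed as `return ()`: yields the empty tuple, not None

def bare_return():
    return  # a bare return yields None, same as falling off the end of the function

assert empty_parens() == () and empty_parens() is not None
assert bare_return() is None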
a1bc613979 XML validation is a LOT cleaner now 2019-12-11 04:32:07 -05:00
06c99221d2 okay, some minor changes to the XML stuff. getting there 2019-12-10 06:59:47 -05:00
c7ce23ff0f restructure this a bit, and timezone done 2019-12-07 00:16:46 -05:00
782ed08a3c services done. that was easy! 2019-12-06 22:57:32 -05:00
9ec1b29160 users is done 2019-12-06 21:19:42 -05:00
af32ba1eed users *almost* done 2019-12-06 05:30:47 -05:00
9c58d3a551 i'll get there one day 2019-12-05 18:09:48 -05:00
c00dc3cbfa whoops 2019-12-05 18:08:07 -05:00
3b3cdb3f6d whoops, circular imports 2019-12-05 18:04:51 -05:00
ebfd164015 locales and console settings are done 2019-12-04 01:48:41 -05:00
7bd704b284 networking is done (probably) 2019-12-02 22:06:27 -05:00
edc78ea18e checking in before i do some major restructuring of wifi stuff in the xml/xsd 2019-12-01 05:07:20 -05:00
3e33abe0a6 forgot to add wifi settings 2019-11-30 01:22:28 -05:00
3a2eca4b98 i officially hate netctl now i think 2019-11-30 01:05:20 -05:00
5e57eb7bc5 networkmanager almost done; needs auto-dev for wifi/ethernet and handling of auto resolvers i think 2019-11-25 05:05:51 -05:00
2a3269e2e0 so turns out we DON'T need NM installed in the host to support it, and DON'T need GI. 2019-11-21 06:15:04 -05:00
brent s
b889d17581 updating; switching to desktop 2019-11-14 03:00:13 -05:00
brent s
8ee5137059 minor changes to xml, small additions to network 2019-11-12 01:27:48 -05:00
brent s
5371ae2361 stubbing network out 2019-11-11 21:42:58 -05:00
brent s
856d89f231 whoop, add'l todo 2019-11-10 05:42:51 -05:00
brent s
012865a6b1 wireless support. duh. 2019-11-10 05:39:33 -05:00
brent s
f68069a25e i'm pretty sure luks non-gi is now done 2019-11-10 01:37:15 -05:00
brent s
a1c126847c i do not. 2019-11-08 16:49:58 -05:00
brent s
7fc416f17c do i want to... 2019-11-08 16:49:41 -05:00
brent s
083c966cad lvm fallback done 2019-11-08 05:19:28 -05:00
brent s
b633b22f59 gi lvm done; added better size support and ability to specify PE size 2019-11-07 19:42:43 -05:00
brent s
00e9c546d7 i *think* i'm done the gi version of disk.lvm. LVM is such a mess. 2019-11-06 16:58:58 -05:00
brent s
fbd1d4b0f3 that's... a little better. gonna be more of a PITA in-code though. 2019-11-06 12:52:50 -05:00
brent s
5f8caf48d6 WHY IS LVM THE WORST THING ON THIS PLANET 2019-11-06 12:48:18 -05:00
brent s
f424938913 mdadm done 2019-11-06 07:33:15 -05:00
brent s
7e6736f6a2 stubbed out rest of storage things 2019-11-06 03:47:08 -05:00
brent s
d4de31dd67 filesystem and mounting done, other minor tweaks. need to do lvm, mdadm, luks 2019-11-06 02:21:04 -05:00
brent s
33ea96d1e1 notes 2019-11-05 07:07:16 -05:00
brent s
37124f066a whew. sorted out a lot of gi/BD and fallback inconsistencies... 2019-11-05 05:52:46 -05:00
brent s
46351329b8 namespace issue fixed! 2019-11-04 23:38:32 -05:00
brent s
27978786b8 making a LOT of headway on the gi stuff but hitting a namespace issue 2019-11-04 23:18:28 -05:00
brent s
0ad8314d0b ... 2019-11-03 01:40:23 -05:00
brent s
d36368c4df some more groundwork 2019-11-01 03:43:14 -04:00
brent s
ca1f12f5bd soooo...
turns out ALL of the disk operations can be performed with
gobject-introspection.

BUT it's unlikely that that'll be available everywhere, or that
the Arch Linux releng team would include it, etc.
So we have fallbacks to mimic it.

BUT please try to use gobject-introspection with libblockdev,
because it's going to be a lot faster and a lot less error-prone.
2019-11-01 02:54:51 -04:00
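The preference this commit describes (use gobject-introspection with libblockdev when available, otherwise fall back to mimicking it) is typically gated at import time. A minimal sketch of that pattern; the flag name is illustrative, not this repo's actual module layout:

try:
    import gi
    gi.require_version('BlockDev', '2.0')  # libblockdev's GI namespace; raises ValueError if absent
    from gi.repository import BlockDev
    _has_blockdev = True
except (ImportError, ValueError):
    _has_blockdev = False  # callers dispatch to the slower fallback implementations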
brent s
9e5ff48926 D'OH 2019-10-31 23:20:05 -04:00
brent s
799ef58667 checking in - xsd doesn't seem to properly detect duplicate pv/lv elems 2019-10-31 22:35:31 -04:00
brent s
1ea84cbac0 i...should probably be using xs:ID? 2019-10-31 18:32:56 -04:00
brent s
c4386d55d1 cleaned up, etc. 2019-10-30 03:46:33 -04:00
brent s
f0d93658d0 size converter done 2019-10-30 03:29:12 -04:00
brent s
f96c815d8d checking in before i restructure some stuff 2019-10-29 22:37:36 -04:00
brent s
af2cd9dd0e gorram it, had to write my own md superblock parser. 2019-10-29 17:43:18 -04:00
brent s
4cdd61da7b adding partition flags 2019-10-29 15:42:09 -04:00
brent s
3a6e8843fe added support for partition flags in config, need to add in-code 2019-10-29 01:01:31 -04:00
brent s
036dd24098 gorram it pycharm, stop warning me about this gorram dollar sign. it doesn't need to and shouldn't be escaped in XSD pattern expressions. 2019-10-28 14:11:35 -04:00
brent s
837d7a4703 jthan is whining about adding him to acknowledgments 2019-10-28 11:39:25 -04:00
brent s
6519ce2f19 make some small changes 2019-10-28 06:05:57 -04:00
brent s
4527f1de91 fixing some regex escapes 2019-10-28 03:44:56 -04:00
brent s
313f217b36 xml/xsd revamp complete 2019-10-28 03:40:26 -04:00
brent s
7f1bbc5022 checking in some XSD work 2019-10-28 01:26:31 -04:00
brent s
9dada73cf0 whew. finally done block.py.
the msdos table primary/extended/logical thing was a pain but the
logic wasn't too bad.
2019-10-26 02:52:47 -04:00
brent s
305a0db34f checking in some stuff... i'm going to rework how i do the disk init to ALWAYS use freshDisk(). 2019-10-22 14:34:39 -04:00
brent s
108588827a still doing some work but checking in what i have so far 2019-10-09 07:18:10 -04:00
brent s
3ca56d7b5c man. some major restructuring, and envsetup.py is a kinda neat hack. 2019-09-30 22:08:37 -04:00
68 changed files with 8731 additions and 3222 deletions

30
.gitignore

@@ -1,13 +1,23 @@
+*.7z
 *.bak
-# We don't need these in git.
-screenlog*
-*.swp
-*.lck
-*~
-.~lock.*
+*.deb
+*.jar
+*.pkg.tar.xz
+*.rar
+*.run
+*.sig
+*.tar
+*.tar.bz2
+*.tar.gz
+*.tar.xz
+*.tbz
+*.tbz2
+*.tgz
+*.txz
+*.zip
+.*.swp
 .editix
 .idea/
-# and we DEFINITELY don't need these.
 __pycache__/
-*.pyc
+test.py
 test*.py

674
LICENSE Normal file

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

5
README Normal file

@@ -0,0 +1,5 @@
AIF-NG (Arch Installation Framework, Next Generation) is a means to install Arch Linux (https://www.archlinux.org/) in an unattended and automated fashion. Think of it as something akin to Red Hat's Kickstart or Debian's Preseed for Arch Linux.

Be sure to import "aif" rather than importing any submodules directly, as deterministic logic is used to set up virtual names.

See https://aif-ng.io/ for more information about this project.
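In code, that import note boils down to the following (the submodule path is illustrative):

import aif            # correct: the package sets up its "virtual" submodule names deterministically
# from aif.disk import block   # avoid: importing submodules directly bypasses that setup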

File diff suppressed because it is too large

354
aif.xsd

@@ -1,354 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://aif.square-r00t.net"
           xmlns="http://aif.square-r00t.net"
           elementFormDefault="qualified">
    <xs:annotation>
        <xs:documentation>
            See https://aif.square-r00t.net/ for more information about this project.
        </xs:documentation>
    </xs:annotation>
    <!-- GLOBAL CUSTOM DATA TYPES -->
    <xs:simpleType name="diskdev">
        <xs:annotation>
            <xs:documentation>
                This element specifies a type to be used for validating storage devices, such as hard disks or mdadm-managed devices.
            </xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:string">
            <xs:pattern value="/dev/([A-Za-z0-9_]+/)?[A-Za-z0-9_]+[0-9]?" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="diskfmt">
        <xs:annotation>
            <xs:documentation>
                This element specifies a type to validate what kind of disk formatting. Accepts either GPT or BIOS (for MBR systems) only.
            </xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:string">
            <xs:pattern value="([Gg][Pp][Tt]|[Bb][Ii][Oo][Ss])" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="disksize">
        <xs:annotation>
            <xs:documentation>
                This element validates a disk size specification for a partition. Same rules apply as those in parted's size specification.
            </xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:string">
            <xs:pattern value="(\+|\-)?[0-9]+([KMGTP]|%)" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="fstype">
        <xs:annotation>
            <xs:documentation>
                This element validates a filesystem type to be specified for formatting a partition. See sgdisk -L (or the table at http://www.rodsbooks.com/gdisk/walkthrough.html) for valid filesystem codes.
            </xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:token">
            <xs:pattern value="[a-z0-9]+" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="mntopts">
        <xs:restriction base="xs:token">
            <xs:pattern value="[A-Za-z0-9_\.\-]+(,[A-Za-z0-9_\.\-]+)*" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="iface">
        <xs:restriction base="xs:token">
            <!-- https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L20 lines 30-47. i have no idea if this will work. TODO: simplify, validate in-code. -->
            <xs:pattern value="(auto|((en|sl|wl|ww)(b[0-9]+|c[a-z0-9]|o[0-9]+(n.*(d.*)?)?|s[0-9]+(f.*)?((n|d).*)?|x([A-Fa-f0-9]:){5}[A-Fa-f0-9]|(P.*)?p[0-9]+s[0-9]+(((f|n|d).*)|u.*)?)))" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="netaddress">
        <xs:restriction base="xs:string">
            <!-- this is a REALLY LAZY regex. matching IPv4 and IPv6 in regex is ugly as heck, so we do that in-code. this is just a gatekeeper. -->
            <xs:pattern value="(auto|[0-9\.]+/[0-9]{,2}|([A-Za-z0-9:]+)/[0-9]+)" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="netproto">
        <xs:restriction base="xs:token">
            <xs:pattern value="(both|ipv4|ipv6)" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="scripturi">
        <xs:restriction base="xs:anyURI">
            <xs:pattern value="(https?|ftps?|file)://" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="devlang">
        <xs:restriction base="xs:token">
            <xs:pattern value="/(usr/)?s?bin/[A-Za-z0-9][A-Za-z\.\-]?" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="nixgroup">
        <xs:restriction base="xs:token">
            <xs:pattern value="[_a-z][-0-9_a-z]*$?" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="nixpass">
        <xs:restriction base="xs:token">
            <xs:pattern value="$(6$[A-Za-z0-9\./\+=]{8,16}$[A-Za-z0-9\./\+=]{86}|1$[A-Za-z0-9\./\+=]{8,16}$[A-Za-z0-9\./\+=]{22}|5$[A-Za-z0-9\./\+=]{8,16}$[A-Za-z0-9\./\+=]{43})" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="pacuri">
        <!-- <xs:restriction base="xs:anyURI"> -->
        <xs:restriction base="xs:token">
            <xs:pattern value="(file|https?)://.*" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="scripttype">
        <xs:restriction base="xs:token">
            <xs:pattern value="(pre|post|pkg)" />
        </xs:restriction>
    </xs:simpleType>
    <xs:simpleType name="bootloaders">
        <xs:restriction base="xs:token">
            <xs:pattern value="(grub|systemd|syslinux)" />
        </xs:restriction>
    </xs:simpleType>

    <xs:simpleType name="authselect">
        <xs:restriction base="xs:token">
            <xs:pattern value="(basic|digest)" />
        </xs:restriction>
    </xs:simpleType>
    <!-- ROOT -->
    <xs:element name="aif">
        <xs:complexType>
            <xs:all>
                <!-- BEGIN STORAGE -->
                <xs:element name="storage" minOccurs="1" maxOccurs="1">
                    <xs:complexType>
                        <xs:sequence>
                            <!-- BEGIN DISK -->
                            <xs:element name="disk" maxOccurs="unbounded" minOccurs="1">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="part" minOccurs="1" maxOccurs="unbounded">
                                            <xs:complexType>
                                                <xs:attribute name="num" type="xs:positiveInteger" use="required" />
                                                <xs:attribute name="start" type="disksize" use="required" />
                                                <xs:attribute name="stop" type="disksize" use="required" />
                                                <xs:attribute name="fstype" type="fstype" use="required" />
                                            </xs:complexType>
                                            <xs:unique name="unique-partnum">
                                                <xs:selector xpath="part" />
                                                <xs:field xpath="@num" />
                                            </xs:unique>
                                        </xs:element>
                                    </xs:sequence>
                                    <xs:attribute name="device" type="diskdev" use="required" />
                                    <xs:attribute name="diskfmt" type="diskfmt" use="required" />
                                </xs:complexType>
                                <xs:unique name="unique-diskdev">
                                    <xs:selector xpath="disk" />
                                    <xs:field xpath="@device" />
                                </xs:unique>
                            </xs:element>
                            <!-- BEGIN MOUNT -->
                            <xs:element name="mount" minOccurs="1" maxOccurs="unbounded">
                                <xs:complexType>
                                    <xs:attribute name="order" type="xs:integer" use="required" />
                                    <xs:attribute name="source" type="diskdev" use="required" />
                                    <xs:attribute name="target" type="xs:token" use="required" />
                                    <xs:attribute name="fstype" type="fstype" />
                                    <xs:attribute name="opts" type="mntopts" />
                                </xs:complexType>
                                <xs:unique name="unique-mnts">
                                    <xs:selector xpath="mount" />
                                    <xs:field xpath="@order" />
                                    <xs:field xpath="@source" />
                                    <xs:field xpath="@target" />
                                </xs:unique>
                            </xs:element>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
                <!-- END MOUNT -->
                <!-- END STORAGE -->
                <!-- BEGIN NETWORK -->
                <xs:element name="network" minOccurs="1" maxOccurs="1">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="iface" minOccurs="1" maxOccurs="unbounded">
                                <xs:complexType>
                                    <xs:attribute name="device" type="iface" use="required" />
                                    <xs:attribute name="address" type="netaddress" use="required" />
                                    <xs:attribute name="netproto" type="netproto" use="required" />
                                    <xs:attribute name="gateway" type="netaddress" />
                                    <xs:attribute name="resolvers" type="xs:string" />
                                </xs:complexType>
                            </xs:element>
                        </xs:sequence>
                        <xs:attribute name="hostname" type="xs:token" use="required" />
                    </xs:complexType>
                    <xs:unique name="unique-iface">
                        <xs:selector xpath="iface" />
                        <xs:field xpath="@address" />
                        <xs:field xpath="@netproto" />
                    </xs:unique>
                </xs:element>
                <!-- END NETWORK -->
                <!-- BEGIN SYSTEM -->
                <xs:element name="system" maxOccurs="1" minOccurs="1">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="users" minOccurs="1" maxOccurs="1">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="user" minOccurs="0" maxOccurs="unbounded">
                                            <xs:complexType>
                                                <xs:sequence>
                                                    <xs:element name="home" minOccurs="0" maxOccurs="1">
                                                        <xs:complexType>
                                                            <xs:attribute name="path" type="xs:token" />
                                                            <xs:attribute name="create" type="xs:boolean" />
                                                        </xs:complexType>
                                                    </xs:element>
                                                    <xs:element name="xgroup" minOccurs="0" maxOccurs="unbounded">
                                                        <xs:complexType>
                                                            <xs:attribute name="name" type="nixgroup" use="required" />
                                                            <xs:attribute name="create" type="xs:boolean" />
                                                            <xs:attribute name="gid" type="xs:boolean" />
                                                        </xs:complexType>
                                                        <xs:unique name="unique-grp">
                                                            <xs:selector xpath="xgroup" />
                                                            <xs:field xpath="@name" />
                                                        </xs:unique>
                                                    </xs:element>
                                                </xs:sequence>
                                                <xs:attribute name="name" type="xs:token" use="required" />
                                                <xs:attribute name="uid" type="xs:token" />
                                                <xs:attribute name="group" type="nixgroup" />
                                                <xs:attribute name="gid" type="xs:token" />
                                                <xs:attribute name="password" type="nixpass" />
                                                <xs:attribute name="comment" type="xs:token" />
                                                <xs:attribute name="sudo" type="xs:boolean" />
                                            </xs:complexType>
                                        </xs:element>
                                    </xs:sequence>
                                    <xs:attribute name="rootpass" type="nixpass" />
                                </xs:complexType>
                                <xs:unique name="unique-usr">
                                    <xs:selector xpath="user" />
                                    <xs:field xpath="@name" />
                                </xs:unique>
                            </xs:element>
                            <xs:element name="service" minOccurs="0" maxOccurs="unbounded">
                                <xs:complexType>
                                    <xs:attribute name="name" type="xs:token" use="required" />
                                    <xs:attribute name="status" type="xs:boolean" use="required" />
                                </xs:complexType>
                                <xs:unique name="unique-svc">
                                    <xs:selector xpath="service" />
                                    <xs:field xpath="@name" />
                                    <xs:field xpath="@status" />
                                </xs:unique>
                            </xs:element>
                        </xs:sequence>
                        <xs:attribute name="timezone" type="xs:string" use="required" />
                        <xs:attribute name="locale" type="xs:string" use="required" />
                        <xs:attribute name="chrootpath" type="xs:string" use="required" />
                        <xs:attribute name="kbd" type="xs:token" />
                        <xs:attribute name="reboot" type="xs:boolean" />
                    </xs:complexType>
                </xs:element>
                <!-- END SYSTEM -->
                <!-- BEGIN PACMAN -->
                <xs:element name="pacman" maxOccurs="1" minOccurs="1">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="repos" maxOccurs="1" minOccurs="1">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="repo" maxOccurs="unbounded" minOccurs="1">
                                            <xs:complexType>
                                                <xs:attribute name="name" type="xs:token" use="required" />
                                                <xs:attribute name="enabled" type="xs:boolean" use="required" />
                                                <xs:attribute name="siglevel" type="xs:token" use="required" />
                                                <xs:attribute name="mirror" type="pacuri" use="required" />
                                            </xs:complexType>
                                        </xs:element>
                                    </xs:sequence>
                                </xs:complexType>
                            </xs:element>
                            <xs:element name="mirrorlist" maxOccurs="1" minOccurs="0">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="mirror" type="pacuri" maxOccurs="unbounded" minOccurs="1" />
                                    </xs:sequence>
                                </xs:complexType>
                                <xs:unique name="unique-mirrors">
                                    <xs:selector xpath="mirror" />
                                    <xs:field xpath="." />
                                </xs:unique>
                            </xs:element>
                            <xs:element name="software" maxOccurs="1" minOccurs="0">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="package" maxOccurs="unbounded" minOccurs="1">
                                            <xs:complexType>
                                                <xs:attribute name="name" type="xs:token" use="required" />
                                                <xs:attribute name="repo" type="xs:token" />
                                            </xs:complexType>
                                        </xs:element>
                                    </xs:sequence>
                                </xs:complexType>
                            </xs:element>
                        </xs:sequence>
                        <xs:attribute name="command" type="xs:string" />
                    </xs:complexType>
                </xs:element>
                <!-- END PACMAN -->
                <!-- BEGIN BOOTLOADER -->
                <xs:element name="bootloader" maxOccurs="1" minOccurs="1">
                    <xs:complexType>
                        <xs:attribute name="type" type="bootloaders" use="required" />
                        <xs:attribute name="target" type="xs:token" use="required" />
                        <xs:attribute name="efi" type="xs:boolean" />
                    </xs:complexType>
                </xs:element>
                <!-- END BOOTLOADER -->
                <!-- BEGIN SCRIPTS -->
                <xs:element name="scripts" maxOccurs="1" minOccurs="0">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="script" minOccurs="1" maxOccurs="unbounded">
                                <xs:complexType>
                                    <xs:attribute name="uri" type="scripturi" use="required" />
                                    <xs:attribute name="order" type="xs:integer" use="required" />
                                    <xs:attribute name="execution" type="scripttype" use="required" />
                                    <xs:attribute name="user" type="xs:string" />
                                    <xs:attribute name="password" type="xs:string" />
                                    <xs:attribute name="realm" type="xs:string" />
                                    <xs:attribute name="authtype" type="authselect" />
                                </xs:complexType>
                            </xs:element>
                        </xs:sequence>
                    </xs:complexType>
                    <xs:unique name="unique-script">
                        <xs:selector xpath="script" />
                        <xs:field xpath="@order" />
                    </xs:unique>
                </xs:element>
                <!-- END SCRIPTS -->
            </xs:all>
        </xs:complexType>
    </xs:element>
</xs:schema>
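For a sense of what this (since-removed) schema accepted, here is a hedged sketch of validating a fragment against it with lxml. The instance document is invented for illustration and deliberately incomplete, so validation fails until the other required sections are present:

from lxml import etree

with open('aif.xsd', 'rb') as fh:  # the schema shown above
    schema = etree.XMLSchema(etree.XML(fh.read()))

# A fragment shaped by the storage section: two GPT partitions and one mount.
doc = etree.XML(b'''
<aif xmlns="http://aif.square-r00t.net">
  <storage>
    <disk device="/dev/sda" diskfmt="gpt">
      <part num="1" start="0%" stop="10%" fstype="ef00"/>
      <part num="2" start="10%" stop="100%" fstype="8300"/>
    </disk>
    <mount order="1" source="/dev/sda2" target="/mnt/aif"/>
  </storage>
</aif>''')

print(schema.validate(doc))  # False: network, system, pacman, and bootloader are also required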

41
aif/__init__.py Normal file

@@ -0,0 +1,41 @@
import logging
##
try:
    from . import constants
    _has_constants = True
except ImportError:
    from . import constants_fallback as constants
    _has_constants = False
from . import log
from . import constants_fallback
from . import utils
from . import disk
from . import system
from . import config
from . import envsetup
from . import network
from . import software


_logger = logging.getLogger('AIF')
if not _has_constants:
    _logger.warning('Could not import constants, so using constants_fallback as constants')


class AIF(object):
    def __init__(self):
        # Process:
        # 0.) get config (already initialized at this point)
        # 1.) run pre scripts*
        # 2.) initialize all objects' classes
        # 3.) disk ops = partition, mount*
        # 3.) b.) "pivot" logging here. create <chroot>/root/aif/ and copy log to <chroot>/root/aif/aif.log, use that
        #         as new log file. copy over scripts.
        # 4.) install base system*
        # 4.) b.) other system.* tasks. locale(s), etc.*
        # 5.) run pkg scripts*
        # 6.) install kernel(?), pkg items*
        # 6.) b.) remember to install the .packages items for each object
        # 7.) write out confs and other object application methods*
        # * = log but don't do anything for dryrun
        pass

2
aif/config/__init__.py Normal file

@@ -0,0 +1,2 @@
from . import parser
# from . import generator # pending API

275
aif/config/parser.py Normal file

@ -0,0 +1,275 @@
import copy
import logging
import os
import re
##
import requests
from lxml import etree, objectify

_logger = logging.getLogger('config:{0}'.format(__name__))


class Config(object):
def __init__(self, xsd_path = None, *args, **kwargs):
self.xsd_path = None
self.tree = None
self.namespaced_tree = None
self.xml = None
self.namespaced_xml = None
self.raw = None
self.xsd = None
self.defaultsParser = None
self.obj = None
_logger.info('Instantiated {0}.'.format(type(self).__name__))

def main(self, validate = True, populate_defaults = True):
self.fetch()
self.parseRaw()
if populate_defaults:
self.populateDefaults()
if validate:
self.validate()
self.pythonize()
return(None)

def fetch(self): # Just a fail-safe; this is overridden by specific subclasses.
pass
return(None)

def getXSD(self, xsdpath = None):
if not xsdpath:
xsdpath = self.xsd_path
raw_xsd = None
base_url = None
if xsdpath:
_logger.debug('XSD path specified.')
orig_xsdpath = xsdpath
xsdpath = os.path.abspath(os.path.expanduser(xsdpath))
_logger.debug('Transformed XSD path: {0} => {1}'.format(orig_xsdpath, xsdpath))
if not os.path.isfile(xsdpath):
_logger.error('The specified XSD path {0} does not exist on the local filesystem.'.format(xsdpath))
raise ValueError('Specified XSD path does not exist')
with open(xsdpath, 'rb') as fh:
raw_xsd = fh.read()
base_url = os.path.split(xsdpath)[0]
else:
_logger.debug('No XSD path specified; getting it from the configuration file.')
xsi = self.xml.nsmap.get('xsi', 'http://www.w3.org/2001/XMLSchema-instance')
_logger.debug('xsi: {0}'.format(xsi))
schemaLocation = '{{{0}}}schemaLocation'.format(xsi)
schemaURL = self.xml.attrib.get(schemaLocation,
'https://schema.xml.r00t2.io/projects/aif.xsd')
_logger.debug('Detected schema map: {0}'.format(schemaURL))
split_url = schemaURL.split()
if len(split_url) == 2: # a properly defined schemaLocation
schemaURL = split_url[1]
else:
schemaURL = split_url[0] # a LAZY schemaLocation
_logger.info('Detected schema location: {0}'.format(schemaURL))
if schemaURL.startswith('file://'):
schemaURL = re.sub(r'^file://', r'', schemaURL)
_logger.debug('Fetching local file {0}'.format(schemaURL))
with open(schemaURL, 'rb') as fh:
raw_xsd = fh.read()
base_url = os.path.dirname(schemaURL)
else:
_logger.debug('Fetching remote file: {0}'.format(schemaURL))
req = requests.get(schemaURL)
if not req.ok:
_logger.error('Unable to fetch XSD.')
raise RuntimeError('Could not download XSD')
raw_xsd = req.content
base_url = os.path.split(req.url)[0] # This makes me feel dirty.
        _logger.debug('Loaded XSD at {0} ({1} bytes).'.format((xsdpath or schemaURL), len(raw_xsd)))
_logger.debug('Parsed XML base URL: {0}'.format(base_url))
self.xsd = etree.XMLSchema(etree.XML(raw_xsd, base_url = base_url))
_logger.info('Rendered XSD.')
return(None)

def parseRaw(self, parser = None):
self.xml = etree.fromstring(self.raw, parser = parser)
_logger.debug('Generated xml.')
self.namespaced_xml = etree.fromstring(self.raw, parser = parser)
_logger.debug('Generated namespaced xml.')
self.tree = self.xml.getroottree()
_logger.debug('Generated tree.')
self.namespaced_tree = self.namespaced_xml.getroottree()
_logger.debug('Generated namespaced tree.')
self.tree.xinclude()
_logger.debug('Parsed XInclude for tree.')
self.namespaced_tree.xinclude()
_logger.debug('Parsed XInclude for namespaced tree.')
self.stripNS()
return(None)

def populateDefaults(self):
_logger.info('Populating missing values with defaults from XSD.')
if not self.xsd:
self.getXSD()
if not self.defaultsParser:
self.defaultsParser = etree.XMLParser(schema = self.xsd, attribute_defaults = True)
self.parseRaw(parser = self.defaultsParser)
return(None)

def pythonize(self, stripped = True, obj = 'tree'):
# https://bugs.launchpad.net/lxml/+bug/1850221
_logger.debug('Pythonizing to native objects')
strobj = self.toString(stripped = stripped, obj = obj)
self.obj = objectify.fromstring(strobj)
objectify.annotate(self.obj)
objectify.xsiannotate(self.obj)
return(None)

def removeDefaults(self):
_logger.info('Removing default values from missing values.')
self.parseRaw()
return(None)

def stripNS(self, obj = None):
_logger.debug('Stripping namespace.')
# https://stackoverflow.com/questions/30232031/how-can-i-strip-namespaces-out-of-an-lxml-tree/30233635#30233635
xpathq = "descendant-or-self::*[namespace-uri()!='']"
if not obj:
_logger.debug('No XML object selected; using instance\'s xml and tree.')
for x in (self.tree, self.xml):
for e in x.xpath(xpathq):
e.tag = etree.QName(e).localname
elif isinstance(obj, (etree._Element, etree._ElementTree)):
_logger.debug('XML object provided: {0}'.format(etree.tostring(obj, with_tail = False).decode('utf-8')))
obj = copy.deepcopy(obj)
for e in obj.xpath(xpathq):
e.tag = etree.QName(e).localname
return(obj)
else:
_logger.error('A non-XML object was provided.')
raise ValueError('Did not know how to parse obj parameter')
return(None)

def toString(self, stripped = False, obj = None):
if isinstance(obj, (etree._Element, etree._ElementTree)):
_logger.debug('Converting an XML object to a string')
if stripped:
_logger.debug('Stripping before stringifying.')
obj = self.stripNS(obj)
elif obj in ('tree', None):
if not stripped:
_logger.debug('Converting the instance\'s namespaced tree to a string.')
obj = self.namespaced_tree
else:
_logger.debug('Converting the instance\'s stripped tree to a string.')
obj = self.tree
elif obj == 'xml':
if not stripped:
_logger.debug('Converting instance\'s namespaced XML to a string')
obj = self.namespaced_xml
else:
_logger.debug('Converting instance\'s stripped XML to a string')
obj = self.xml
else:
_logger.error(('obj parameter must be "tree", "xml", or of type '
'lxml.etree._Element or lxml.etree._ElementTree'))
raise TypeError('Invalid obj type')
obj = copy.deepcopy(obj)
strxml = etree.tostring(obj,
encoding = 'utf-8',
xml_declaration = True,
pretty_print = True,
with_tail = True,
inclusive_ns_prefixes = True)
_logger.debug('Rendered string output successfully.')
return(strxml)

def validate(self):
if not self.xsd:
self.getXSD()
_logger.debug('Checking validation against namespaced tree.')
self.xsd.assertValid(self.namespaced_tree)
return(None)


class LocalFile(Config):
def __init__(self, path, xsd_path = None, *args, **kwargs):
super().__init__(xsd_path = xsd_path, *args, **kwargs)
self.type = 'local'
self.source = path

def fetch(self):
orig_src = self.source
self.source = os.path.abspath(os.path.expanduser(self.source))
_logger.debug('Canonized path: {0} => {1}'.format(orig_src, self.source))
if not os.path.isfile(self.source):
_logger.error('Config at {0} not found.'.format(self.source))
            raise ValueError('Config file {0} does not exist'.format(self.source))
with open(self.source, 'rb') as fh:
self.raw = fh.read()
_logger.debug('Fetched configuration ({0} bytes).'.format(len(self.raw)))
return(None)


class RemoteFile(Config):
def __init__(self, uri, xsd_path = None, *args, **kwargs):
        super().__init__(xsd_path = xsd_path, *args, **kwargs)
self.type = 'remote'
self.source = uri

def fetch(self):
r = requests.get(self.source)
        if not r.ok:  # requests' Response.ok is a property, not a method.
_logger.error('Could not fetch {0}'.format(self.source))
raise RuntimeError('Could not download XML')
self.raw = r.content
_logger.debug('Fetched configuration ({0} bytes).'.format(len(self.raw)))
return(None)


class ConfigStr(Config):
def __init__(self, rawxml, xsd_path = None, *args, **kwargs):
        super().__init__(xsd_path = xsd_path, *args, **kwargs)
self.type = 'raw_str'
self.source = rawxml

def fetch(self):
self.raw = self.source.encode('utf-8')
_logger.debug('Raw configuration (str) passed in ({0} bytes); converted to bytes.'.format(len(self.raw)))
return(None)


class ConfigBin(Config):
def __init__(self, rawbinaryxml, xsd_path = None, *args, **kwargs):
        super().__init__(xsd_path = xsd_path, *args, **kwargs)
self.type = 'raw_bin'
self.source = rawbinaryxml

def fetch(self):
self.raw = self.source
_logger.debug('Raw configuration (binary) passed in ({0} bytes); converted to bytes.'.format(len(self.raw)))
return(None)


detector = {'raw': (re.compile(r'^\s*(?P<xml><(\?xml|aif)\s+.*)\s*$', re.DOTALL | re.MULTILINE), ConfigStr),
'remote': (re.compile(r'^(?P<uri>(?P<scheme>(https?|ftps?)://)(?P<path>.*))\s*$'), RemoteFile),
'local': (re.compile(r'^(file://)?(?P<path>(/?[^/]+)+/?)$'), LocalFile)}
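# For illustration (a sketch; the URL and paths are hypothetical):
#   detector['remote'][0] matches e.g. 'https://example.com/aif.xml'             => RemoteFile
#   detector['local'][0] matches e.g. '/root/aif.xml' or 'file:///root/aif.xml'  => LocalFile
#   detector['raw'][0] matches e.g. '<?xml version="1.0" ...' or '<aif ...'      => ConfigStr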


def getConfig(cfg_ref, validate = True, populate_defaults = True, xsd_path = None):
cfgobj = None
# This is kind of gross.
for configtype, (pattern, configClass) in detector.items():
try:
if pattern.search(cfg_ref):
cfgobj = configClass(cfg_ref, xsd_path = xsd_path)
_logger.info('Config detected as {0}.'.format(configtype))
break
except TypeError:
ptrn = re.compile(detector['raw'][0].pattern.encode('utf-8'))
if not ptrn.search(cfg_ref):
_logger.error('Could not detect which configuration type was passed.')
raise ValueError('Unexpected/unparseable cfg_ref.')
else:
_logger.info('Config detected as ConfigBin.')
cfgobj = ConfigBin(cfg_ref, xsd_path = xsd_path)
break
if cfgobj:
_logger.info('Parsing configuration.')
cfgobj.main(validate = validate, populate_defaults = populate_defaults)
return(cfgobj)
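
# A minimal usage sketch (hypothetical paths; not part of this module):
#   import aif.config.parser
#   cfg = aif.config.parser.getConfig('/root/aif.xml', xsd_path = '/root/aif.xsd')
#   if cfg:
#       print(cfg.type)  # => 'local'
#       root = cfg.obj   # the objectified tree, ready for attribute access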

aif/constants.py Normal file
@@ -0,0 +1,21 @@
from .constants_fallback import *
##
# This creates a conflict of imports, unfortunately.
# So we end up doing the same thing in aif/disk/(__init__.py => _common.py)... C'est la vie.
# Patches welcome.
# import aif.disk._common
# _BlockDev = aif.disk._common.BlockDev
# aif.disk._common.addBDPlugin('part')
import gi
gi.require_version('BlockDev', '2.0')
from gi.repository import BlockDev as _BlockDev
from gi.repository import GLib
_BlockDev.ensure_init(_BlockDev.plugin_specs_from_names(('part', )))


# LIBBLOCKDEV FLAG INDEXING / PARTED <=> LIBBLOCKDEV FLAG CONVERSION
BD_PART_FLAGS = _BlockDev.PartFlag(-1)
BD_PART_FLAGS_FRIENDLY = dict(zip(BD_PART_FLAGS.value_nicks, BD_PART_FLAGS.value_names))
PARTED_BD_MAP = {v: k for k, v in BD_PARTED_MAP.items() if v is not None}
BD_PART_FLAGS_IDX_FLAG = {k: v.value_nicks[0] for k, v in BD_PART_FLAGS.__flags_values__.items()}
BD_PART_FLAGS_FLAG_IDX = {v: k for k, v in BD_PART_FLAGS_IDX_FLAG.items()}

aif/constants_fallback.py Normal file
@@ -0,0 +1,293 @@
import hashlib
import re
import subprocess # I wish there was a better way to get the supported LUKS ciphers.
import uuid
##
import parted # https://www.gnu.org/software/parted/api/index.html

# META
ARCH_RELENG_KEY = '4AA4767BBC9C4B1D18AE28B77F2D434B9741E8AC'
VERSION = '0.2.0'
# blkinfo, mdstat, and pyparted are only needed for the non-gi fallbacks.
EXTERNAL_DEPS = ['blkinfo',
'gpg',
'jinja2',
'lxml',
'mdstat',
'parse',
'passlib',
'psutil',
'pyparted',
'pyroute2',
'pytz',
'requests',
'validators']
DEFAULT_LOGFILE = '/var/log/aif.log'
# PARTED FLAG INDEXING
PARTED_FSTYPES = sorted(list(dict(vars(parted.filesystem))['fileSystemType'].keys()))
PARTED_FSTYPES_GUIDS = {'affs0': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs1': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs2': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs3': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs4': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs5': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs6': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'affs7': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs0': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs1': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs2': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs3': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs4': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'amufs5': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'apfs1': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'apfs2': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'asfs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'btrfs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'ext2': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'ext3': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'ext4': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'fat16': uuid.UUID(hex = 'EBD0A0A2-B9E5-4433-87C0-68B6B72699C7'),
'fat32': uuid.UUID(hex = 'EBD0A0A2-B9E5-4433-87C0-68B6B72699C7'),
'hfs': uuid.UUID(hex = '48465300-0000-11AA-AA11-00306543ECAC'),
'hfs+': uuid.UUID(hex = '48465300-0000-11AA-AA11-00306543ECAC'),
'hfsx': uuid.UUID(hex = '48465300-0000-11AA-AA11-00306543ECAC'),
'hp-ufs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'jfs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'linux-swap(v0)': uuid.UUID(hex = '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F'),
'linux-swap(v1)': uuid.UUID(hex = '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F'),
'nilfs2': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'ntfs': uuid.UUID(hex = 'EBD0A0A2-B9E5-4433-87C0-68B6B72699C7'),
'reiserfs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'sun-ufs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'swsusp': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4'),
'udf': uuid.UUID(hex = 'EBD0A0A2-B9E5-4433-87C0-68B6B72699C7'),
'xfs': uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4')}
PARTED_FLAGS = sorted(list(parted.partition.partitionFlag.values()))
PARTED_IDX_FLAG = dict(parted.partition.partitionFlag)
PARTED_FLAG_IDX = {v: k for k, v in PARTED_IDX_FLAG.items()}
# LIBBLOCKDEV BOOTSTRAPPING (ALLOWED VALUES IN CONFIG)
# https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_entries_(LBA_2%E2%80%9333)
BD_PARTED_MAP = {'apple_tv_recovery': 'atvrecv',
'cpalo': 'palo',
'gpt_hidden': None, # No parted equivalent
'gpt_no_automount': None, # No parted equivalent
'gpt_read_only': None, # No parted equivalent
'gpt_system_part': None, # No parted equivalent
'hpservice': 'hp-service',
'msft_data': 'msftdata',
'msft_reserved': 'msftres'}
# GPT FSTYPES GUIDS
# I'm doing this now because if I didn't, I would probably need to do it later eventually.
# https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
GPT_FSTYPE_GUIDS = ((1, 'EFI System', uuid.UUID(hex = 'C12A7328-F81F-11D2-BA4B-00A0C93EC93B')),
(2, 'MBR partition scheme', uuid.UUID(hex = '024DEE41-33E7-11D3-9D69-0008C781F39F')),
(3, 'Intel Fast Flash', uuid.UUID(hex = 'D3BFE2DE-3DAF-11DF-BA40-E3A556D89593')),
(4, 'BIOS boot', uuid.UUID(hex = '21686148-6449-6E6F-744E-656564454649')),
(5, 'Sony boot partition', uuid.UUID(hex = 'F4019732-066E-4E12-8273-346C5641494F')),
(6, 'Lenovo boot partition', uuid.UUID(hex = 'BFBFAFE7-A34F-448A-9A5B-6213EB736C22')),
(7, 'PowerPC PReP boot', uuid.UUID(hex = '9E1A2D38-C612-4316-AA26-8B49521E5A8B')),
(8, 'ONIE boot', uuid.UUID(hex = '7412F7D5-A156-4B13-81DC-867174929325')),
(9, 'ONIE config', uuid.UUID(hex = 'D4E6E2CD-4469-46F3-B5CB-1BFF57AFC149')),
(10, 'Microsoft reserved', uuid.UUID(hex = 'E3C9E316-0B5C-4DB8-817D-F92DF00215AE')),
(11, 'Microsoft basic data', uuid.UUID(hex = 'EBD0A0A2-B9E5-4433-87C0-68B6B72699C7')),
(12, 'Microsoft LDM metadata', uuid.UUID(hex = '5808C8AA-7E8F-42E0-85D2-E1E90434CFB3')),
(13, 'Microsoft LDM data', uuid.UUID(hex = 'AF9B60A0-1431-4F62-BC68-3311714A69AD')),
(14, 'Windows recovery environment', uuid.UUID(hex = 'DE94BBA4-06D1-4D40-A16A-BFD50179D6AC')),
(15, 'IBM General Parallel Fs', uuid.UUID(hex = '37AFFC90-EF7D-4E96-91C3-2D7AE055B174')),
(16, 'Microsoft Storage Spaces', uuid.UUID(hex = 'E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D')),
(17, 'HP-UX data', uuid.UUID(hex = '75894C1E-3AEB-11D3-B7C1-7B03A0000000')),
(18, 'HP-UX service', uuid.UUID(hex = 'E2A1E728-32E3-11D6-A682-7B03A0000000')),
(19, 'Linux swap', uuid.UUID(hex = '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F')),
(20, 'Linux filesystem', uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4')),
(21, 'Linux server data', uuid.UUID(hex = '3B8F8425-20E0-4F3B-907F-1A25A76F98E8')),
(22, 'Linux root (x86)', uuid.UUID(hex = '44479540-F297-41B2-9AF7-D131D5F0458A')),
(23, 'Linux root (ARM)', uuid.UUID(hex = '69DAD710-2CE4-4E3C-B16C-21A1D49ABED3')),
(24, 'Linux root (x86-64)', uuid.UUID(hex = '4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709')),
(25, 'Linux root (ARM-64)', uuid.UUID(hex = 'B921B045-1DF0-41C3-AF44-4C6F280D3FAE')),
(26, 'Linux root (IA-64)', uuid.UUID(hex = '993D8D3D-F80E-4225-855A-9DAF8ED7EA97')),
(27, 'Linux reserved', uuid.UUID(hex = '8DA63339-0007-60C0-C436-083AC8230908')),
(28, 'Linux home', uuid.UUID(hex = '933AC7E1-2EB4-4F13-B844-0E14E2AEF915')),
(29, 'Linux RAID', uuid.UUID(hex = 'A19D880F-05FC-4D3B-A006-743F0F84911E')),
(30, 'Linux extended boot', uuid.UUID(hex = 'BC13C2FF-59E6-4262-A352-B275FD6F7172')),
(31, 'Linux LVM', uuid.UUID(hex = 'E6D6D379-F507-44C2-A23C-238F2A3DF928')),
(32, 'FreeBSD data', uuid.UUID(hex = '516E7CB4-6ECF-11D6-8FF8-00022D09712B')),
(33, 'FreeBSD boot', uuid.UUID(hex = '83BD6B9D-7F41-11DC-BE0B-001560B84F0F')),
(34, 'FreeBSD swap', uuid.UUID(hex = '516E7CB5-6ECF-11D6-8FF8-00022D09712B')),
(35, 'FreeBSD UFS', uuid.UUID(hex = '516E7CB6-6ECF-11D6-8FF8-00022D09712B')),
(36, 'FreeBSD ZFS', uuid.UUID(hex = '516E7CBA-6ECF-11D6-8FF8-00022D09712B')),
(37, 'FreeBSD Vinum', uuid.UUID(hex = '516E7CB8-6ECF-11D6-8FF8-00022D09712B')),
(38, 'Apple HFS/HFS+', uuid.UUID(hex = '48465300-0000-11AA-AA11-00306543ECAC')),
(39, 'Apple UFS', uuid.UUID(hex = '55465300-0000-11AA-AA11-00306543ECAC')),
(40, 'Apple RAID', uuid.UUID(hex = '52414944-0000-11AA-AA11-00306543ECAC')),
(41, 'Apple RAID offline', uuid.UUID(hex = '52414944-5F4F-11AA-AA11-00306543ECAC')),
(42, 'Apple boot', uuid.UUID(hex = '426F6F74-0000-11AA-AA11-00306543ECAC')),
(43, 'Apple label', uuid.UUID(hex = '4C616265-6C00-11AA-AA11-00306543ECAC')),
(44, 'Apple TV recovery', uuid.UUID(hex = '5265636F-7665-11AA-AA11-00306543ECAC')),
(45, 'Apple Core storage', uuid.UUID(hex = '53746F72-6167-11AA-AA11-00306543ECAC')),
(46, 'Solaris boot', uuid.UUID(hex = '6A82CB45-1DD2-11B2-99A6-080020736631')),
(47, 'Solaris root', uuid.UUID(hex = '6A85CF4D-1DD2-11B2-99A6-080020736631')),
(48, 'Solaris /usr & Apple ZFS', uuid.UUID(hex = '6A898CC3-1DD2-11B2-99A6-080020736631')),
(49, 'Solaris swap', uuid.UUID(hex = '6A87C46F-1DD2-11B2-99A6-080020736631')),
(50, 'Solaris backup', uuid.UUID(hex = '6A8B642B-1DD2-11B2-99A6-080020736631')),
(51, 'Solaris /var', uuid.UUID(hex = '6A8EF2E9-1DD2-11B2-99A6-080020736631')),
(52, 'Solaris /home', uuid.UUID(hex = '6A90BA39-1DD2-11B2-99A6-080020736631')),
(53, 'Solaris alternate sector', uuid.UUID(hex = '6A9283A5-1DD2-11B2-99A6-080020736631')),
(54, 'Solaris reserved 1', uuid.UUID(hex = '6A945A3B-1DD2-11B2-99A6-080020736631')),
(55, 'Solaris reserved 2', uuid.UUID(hex = '6A9630D1-1DD2-11B2-99A6-080020736631')),
(56, 'Solaris reserved 3', uuid.UUID(hex = '6A980767-1DD2-11B2-99A6-080020736631')),
(57, 'Solaris reserved 4', uuid.UUID(hex = '6A96237F-1DD2-11B2-99A6-080020736631')),
(58, 'Solaris reserved 5', uuid.UUID(hex = '6A8D2AC7-1DD2-11B2-99A6-080020736631')),
(59, 'NetBSD swap', uuid.UUID(hex = '49F48D32-B10E-11DC-B99B-0019D1879648')),
(60, 'NetBSD FFS', uuid.UUID(hex = '49F48D5A-B10E-11DC-B99B-0019D1879648')),
(61, 'NetBSD LFS', uuid.UUID(hex = '49F48D82-B10E-11DC-B99B-0019D1879648')),
(62, 'NetBSD concatenated', uuid.UUID(hex = '2DB519C4-B10E-11DC-B99B-0019D1879648')),
(63, 'NetBSD encrypted', uuid.UUID(hex = '2DB519EC-B10E-11DC-B99B-0019D1879648')),
(64, 'NetBSD RAID', uuid.UUID(hex = '49F48DAA-B10E-11DC-B99B-0019D1879648')),
(65, 'ChromeOS kernel', uuid.UUID(hex = 'FE3A2A5D-4F32-41A7-B725-ACCC3285A309')),
(66, 'ChromeOS root fs', uuid.UUID(hex = '3CB8E202-3B7E-47DD-8A3C-7FF2A13CFCEC')),
(67, 'ChromeOS reserved', uuid.UUID(hex = '2E0A753D-9E48-43B0-8337-B15192CB1B5E')),
(68, 'MidnightBSD data', uuid.UUID(hex = '85D5E45A-237C-11E1-B4B3-E89A8F7FC3A7')),
(69, 'MidnightBSD boot', uuid.UUID(hex = '85D5E45E-237C-11E1-B4B3-E89A8F7FC3A7')),
(70, 'MidnightBSD swap', uuid.UUID(hex = '85D5E45B-237C-11E1-B4B3-E89A8F7FC3A7')),
(71, 'MidnightBSD UFS', uuid.UUID(hex = '0394EF8B-237E-11E1-B4B3-E89A8F7FC3A7')),
(72, 'MidnightBSD ZFS', uuid.UUID(hex = '85D5E45D-237C-11E1-B4B3-E89A8F7FC3A7')),
(73, 'MidnightBSD Vinum', uuid.UUID(hex = '85D5E45C-237C-11E1-B4B3-E89A8F7FC3A7')),
(74, 'Ceph Journal', uuid.UUID(hex = '45B0969E-9B03-4F30-B4C6-B4B80CEFF106')),
(75, 'Ceph Encrypted Journal', uuid.UUID(hex = '45B0969E-9B03-4F30-B4C6-5EC00CEFF106')),
(76, 'Ceph OSD', uuid.UUID(hex = '4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D')),
(77, 'Ceph crypt OSD', uuid.UUID(hex = '4FBD7E29-9D25-41B8-AFD0-5EC00CEFF05D')),
(78, 'Ceph disk in creation', uuid.UUID(hex = '89C57F98-2FE5-4DC0-89C1-F3AD0CEFF2BE')),
(79, 'Ceph crypt disk in creation', uuid.UUID(hex = '89C57F98-2FE5-4DC0-89C1-5EC00CEFF2BE')),
(80, 'VMware VMFS', uuid.UUID(hex = 'AA31E02A-400F-11DB-9590-000C2911D1B8')),
(81, 'VMware Diagnostic', uuid.UUID(hex = '9D275380-40AD-11DB-BF97-000C2911D1B8')),
(82, 'VMware Virtual SAN', uuid.UUID(hex = '381CFCCC-7288-11E0-92EE-000C2911D0B2')),
(83, 'VMware Virsto', uuid.UUID(hex = '77719A0C-A4A0-11E3-A47E-000C29745A24')),
(84, 'VMware Reserved', uuid.UUID(hex = '9198EFFC-31C0-11DB-8F78-000C2911D1B8')),
(85, 'OpenBSD data', uuid.UUID(hex = '824CC7A0-36A8-11E3-890A-952519AD3F61')),
(86, 'QNX6 file system', uuid.UUID(hex = 'CEF5A9AD-73BC-4601-89F3-CDEEEEE321A1')),
(87, 'Plan 9 partition', uuid.UUID(hex = 'C91818F9-8025-47AF-89D2-F030D7000C2C')),
(88, 'HiFive Unleashed FSBL', uuid.UUID(hex = '5B193300-FC78-40CD-8002-E86C45580B47')),
(89, 'HiFive Unleashed BBL', uuid.UUID(hex = '2E54B353-1271-4842-806F-E436D6AF6985')))
GPT_GUID_IDX = {k[2]: k[0] for k in GPT_FSTYPE_GUIDS}
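# Example lookup (a sketch): the index of the "Linux filesystem" type GUID (entry 20 above):
#   GPT_GUID_IDX[uuid.UUID(hex = '0FC63DAF-8483-4772-8E79-3D69D8477DE4')]  # => 20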
# MSDOS FSTYPES IDENTIFIERS
# Second verse, same as the first - kind of. The msdos types use a single-byte identifier rather than a UUID.
# https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/plain/include/pt-mbr-partnames.h
MSDOS_FSTYPE_IDS = ((1, 'Empty', b'\x00'),
(2, 'FAT12', b'\x01'),
(3, 'XENIX root', b'\x02'),
(4, 'XENIX usr', b'\x03'),
(5, 'FAT16 <32M', b'\x04'),
(6, 'Extended', b'\x05'),
(7, 'FAT16', b'\x06'),
(8, 'HPFS/NTFS/exFAT', b'\x07'),
(9, 'AIX', b'\x08'),
(10, 'AIX bootable', b'\t'), # \x09
(11, 'OS/2 Boot Manager', b'\n'), # \x0A
(12, 'W95 FAT32', b'\x0B'),
(13, 'W95 FAT32 (LBA)', b'\x0C'),
(14, 'W95 FAT16 (LBA)', b'\x0E'),
(15, "W95 Ext'd (LBA)", b'\x0F'),
(16, 'OPUS', b'\x10'),
(17, 'Hidden FAT12', b'\x11'),
(18, 'Compaq diagnostics', b'\x12'),
(19, 'Hidden FAT16 <32M', b'\x14'),
(20, 'Hidden FAT16', b'\x16'),
(21, 'Hidden HPFS/NTFS', b'\x17'),
(22, 'AST SmartSleep', b'\x18'),
(23, 'Hidden W95 FAT32', b'\x1B'),
(24, 'Hidden W95 FAT32 (LBA)', b'\x1C'),
(25, 'Hidden W95 FAT16 (LBA)', b'\x1E'),
(26, 'NEC DOS', b'$'), # \x24
(27, 'Hidden NTFS WinRE', b"'"), # \x27
(28, 'Plan 9', b'9'), # \x39
(29, 'PartitionMagic recovery', b'<'), # \x3C
(30, 'Venix 80286', b'@'), # \x40
(31, 'PPC PReP Boot', b'A'), # \x41
(32, 'SFS', b'B'), # \x42
(33, 'QNX4.x', b'M'), # \x4D
(34, 'QNX4.x 2nd part', b'N'), # \x4E
(35, 'QNX4.x 3rd part', b'O'), # \x4F
(36, 'OnTrack DM', b'P'), # \x50
(37, 'OnTrack DM6 Aux1', b'Q'), # \x51
(38, 'CP/M', b'R'), # \x52
(39, 'OnTrack DM6 Aux3', b'S'), # \x53
(40, 'OnTrackDM6', b'T'), # \x54
(41, 'EZ-Drive', b'U'), # \x55
(42, 'Golden Bow', b'V'), # \x56
(43, 'Priam Edisk', b'\\'), # \x5C
(44, 'SpeedStor', b'a'), # \x61
(45, 'GNU HURD or SysV', b'c'), # \x63
(46, 'Novell Netware 286', b'd'), # \x64
(47, 'Novell Netware 386', b'e'), # \x65
(48, 'DiskSecure Multi-Boot', b'p'), # \x70
(49, 'PC/IX', b'u'), # \x75
(50, 'Old Minix', b'\x80'),
(51, 'Minix / old Linux', b'\x81'),
(52, 'Linux swap / Solaris', b'\x82'),
(53, 'Linux', b'\x83'),
(54, 'OS/2 hidden or Intel hibernation', b'\x84'),
(55, 'Linux extended', b'\x85'),
(56, 'NTFS volume set', b'\x86'),
(57, 'NTFS volume set', b'\x87'),
(58, 'Linux plaintext', b'\x88'),
(59, 'Linux LVM', b'\x8E'),
(60, 'Amoeba', b'\x93'),
(61, 'Amoeba BBT', b'\x94'),
(62, 'BSD/OS', b'\x9F'),
(63, 'IBM Thinkpad hibernation', b'\xA0'),
(64, 'FreeBSD', b'\xA5'),
(65, 'OpenBSD', b'\xA6'),
(66, 'NeXTSTEP', b'\xA7'),
(67, 'Darwin UFS', b'\xA8'),
(68, 'NetBSD', b'\xA9'),
(69, 'Darwin boot', b'\xAB'),
(70, 'HFS / HFS+', b'\xAF'),
(71, 'BSDI fs', b'\xB7'),
(72, 'BSDI swap', b'\xB8'),
(73, 'Boot Wizard hidden', b'\xBB'),
(74, 'Acronis FAT32 LBA', b'\xBC'),
(75, 'Solaris boot', b'\xBE'),
(76, 'Solaris', b'\xBF'),
(77, 'DRDOS/sec (FAT-12)', b'\xC1'),
(78, 'DRDOS/sec (FAT-16 < 32M)', b'\xC4'),
(79, 'DRDOS/sec (FAT-16)', b'\xC6'),
(80, 'Syrinx', b'\xC7'),
(81, 'Non-FS data', b'\xDA'),
(82, 'CP/M / CTOS / ...', b'\xDB'),
(83, 'Dell Utility', b'\xDE'),
(84, 'BootIt', b'\xDF'),
(85, 'DOS access', b'\xE1'),
(86, 'DOS R/O', b'\xE3'),
(87, 'SpeedStor', b'\xE4'),
(88, 'Rufus alignment', b'\xEA'),
(89, 'BeOS fs', b'\xEB'),
(90, 'GPT', b'\xEE'),
(91, 'EFI (FAT-12/16/32)', b'\xEF'),
(92, 'Linux/PA-RISC boot', b'\xF0'),
(93, 'SpeedStor', b'\xF1'),
(94, 'SpeedStor', b'\xF4'),
(95, 'DOS secondary', b'\xF2'),
(96, 'VMware VMFS', b'\xFB'),
(97, 'VMware VMKCORE', b'\xFC'),
(98, 'Linux raid autodetect', b'\xFD'),
(99, 'LANstep', b'\xFE'),
(100, 'BBT', b'\xFF'))
MDADM_SUPPORTED_LEVELS = (0, 1, 4, 5, 6, 10)
MDADM_SUPPORTED_METADATA = ('0', '0.90', '1', '1.0', '1.1', '1.2', 'default', 'ddf', 'imsm')
MDADM_SUPPORTED_LAYOUTS = {5: (re.compile(r'^((left|right)-a?symmetric|[lr][as]|'
r'parity-(fir|la)st|'
r'ddf-(N|zero)-restart|ddf-N-continue)$'),
'left-symmetric'),
6: (re.compile(r'^((left|right)-a?symmetric(-6)?|[lr][as]|'
r'parity-(fir|la)st|'
r'ddf-(N|zero)-restart|ddf-N-continue|'
r'parity-first-6)$'),
None),
10: (re.compile(r'^[nof][0-9]+$'),
None)}
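# Each value above is a (regex, default_layout) pair; a validation sketch for RAID10:
#   MDADM_SUPPORTED_LAYOUTS[10][0].search('n2')  # matches (the common "near" layout, 2 copies)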
# glibc doesn't support bcrypt/blowfish nor des (nor any of the others, like e.g. scrypt)
CRYPT_SUPPORTED_HASHTYPES = ('sha512', 'sha256', 'md5')
HASH_BUILTIN_SUPPORTED_TYPES = tuple(sorted(list(hashlib.algorithms_available)))
HASH_EXTRA_SUPPORTED_TYPES = set(('adler32', 'crc32'))
HASH_SUPPORTED_TYPES = tuple(sorted(list(hashlib.algorithms_available.union(HASH_EXTRA_SUPPORTED_TYPES))))
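# e.g. both 'sha256' and 'crc32' end up in HASH_SUPPORTED_TYPES; the latter is one of the
# extras above (in the stdlib, adler32/crc32 live in zlib rather than hashlib).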

aif/disk/__init__.py Normal file
@@ -0,0 +1,31 @@
try:
from . import _common
except ImportError:
pass # GI isn't supported, so we don't even use a fallback.

try:
from . import block
except ImportError:
from . import block_fallback as block

try:
from . import filesystem
except ImportError:
from . import filesystem_fallback as filesystem

try:
from . import luks
except ImportError:
from . import luks_fallback as luks

try:
from . import lvm
except ImportError:
from . import lvm_fallback as lvm

try:
from . import mdadm
except ImportError:
from . import mdadm_fallback as mdadm

from . import main
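
# To make the pattern above explicit: prefer the libblockdev/GI-backed modules, and fall
# back to the pure-Python implementations whenever the gi bindings are unavailable.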

aif/disk/_common.py Normal file
@@ -0,0 +1,20 @@
import logging
##
import gi
gi.require_version('BlockDev', '2.0')
from gi.repository import BlockDev, GLib

BlockDev.ensure_init([None])

_logger = logging.getLogger('disk:_common')


def addBDPlugin(plugin_name):
_logger.info('Enabling plugin: {0}'.format(plugin_name))
plugins = BlockDev.get_available_plugin_names()
plugins.append(plugin_name)
plugins = list(set(plugins)) # Deduplicate
_logger.debug('Currently loaded plugins: {0}'.format(','.join(plugins)))
spec = BlockDev.plugin_specs_from_names(plugins)
_logger.debug('Plugin {0} loaded.'.format(plugin_name))
return(BlockDev.ensure_init(spec))
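
# Usage sketch: enable a plugin before touching its API, e.g.:
#   addBDPlugin('crypto')  # before BlockDev.crypto.* calls
#   addBDPlugin('fs')      # before BlockDev.fs.mount()/BlockDev.fs.unmount()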

aif/disk/block.py Normal file
@@ -0,0 +1,239 @@
import logging
import os
import uuid
##
import blkinfo
# import psutil # Do I need this if I can have libblockdev's mounts API? Is there a way to get current mounts?
from lxml import etree
##
import aif.constants
import aif.utils
from . import _common


_BlockDev = _common.BlockDev
_logger = logging.getLogger(__name__)


class Disk(object):
def __init__(self, disk_xml):
self.xml = disk_xml
_logger.debug('disk_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
        self.id = self.xml.attrib['id']
        self.devpath = os.path.realpath(self.xml.attrib['device'])
        _common.addBDPlugin('part')  # Use the direct import; aif.disk may not be fully initialized yet.
self.is_lowformatted = None
self.is_hiformatted = None
self.is_partitioned = None
self.partitions = None
self._initDisk()

def _initDisk(self):
if self.devpath == 'auto':
self.devpath = '/dev/{0}'.format(blkinfo.BlkDiskInfo().get_disks()[0]['kname'])
        if not os.path.exists(self.devpath):  # isfile() is False for block device nodes.
_logger.error('Disk {0} does not exist; please specify an explicit device path'.format(self.devpath))
raise ValueError('Disk not found')
self.table_type = self.xml.attrib.get('diskFormat', 'gpt').lower()
if self.table_type in ('bios', 'mbr', 'dos', 'msdos'):
_logger.debug('Disk format set to MSDOS.')
self.table_type = _BlockDev.PartTableType.MSDOS
elif self.table_type == 'gpt':
self.table_type = _BlockDev.PartTableType.GPT
_logger.debug('Disk format set to GPT.')
else:
            _logger.error('Disk format {0} is invalid for this system\'s architecture; must be gpt or msdos'.format(self.table_type))
raise ValueError('Invalid disk format')
self.device = self.disk = _BlockDev.part.get_disk_spec(self.devpath)
self.is_lowformatted = False
self.is_hiformatted = False
self.is_partitioned = False
self.partitions = []
return(None)

def diskFormat(self):
if self.is_lowformatted:
return(None)
# This is a safeguard. We do *not* want to low-format a disk that is mounted.
aif.utils.checkMounted(self.devpath)
# TODO: BlockDev.part.set_disk_flag(<disk>,
# BlockDev.PartDiskFlag(1),
# True) ??
# https://lazka.github.io/pgi-docs/BlockDev-2.0/enums.html#BlockDev.PartDiskFlag
# https://unix.stackexchange.com/questions/325886/bios-gpt-do-we-need-a-boot-flag
_BlockDev.part.create_table(self.devpath, self.table_type, True)
self.is_lowformatted = True
self.is_partitioned = False
return(None)

def getPartitions(self):
# For GPT, this *technically* should be 34 -- or, more precisely, 2048 (see FAQ in manual), but the alignment
# optimizer fixes it for us automatically.
# But for DOS tables, it's required.
        _logger.info('Establishing partitions for {0}'.format(self.devpath))
        # self.table_type was converted to a _BlockDev.PartTableType enum in _initDisk(), so we
        # compare against the enum and hand Partition() the string form it expects.
        tbltype = ('msdos' if self.table_type == _BlockDev.PartTableType.MSDOS else 'gpt')
        if tbltype == 'msdos':
            start_sector = 2048
        else:
            start_sector = 0
        self.partitions = []
        xml_partitions = self.xml.findall('part')
        for idx, part in enumerate(xml_partitions):
            partnum = idx + 1
            if tbltype == 'gpt':
                p = Partition(part, self.disk, start_sector, partnum, tbltype)
            else:
                parttype = 'primary'
                if len(xml_partitions) > 4:
                    if partnum == 4:
                        parttype = 'extended'
                    elif partnum > 4:
                        parttype = 'logical'
                p = Partition(part, self.disk, start_sector, partnum, tbltype, part_type = parttype)
start_sector = p.end + 1
self.partitions.append(p)
_logger.debug('Added partition {0}'.format(p.id))
return(None)

def partFormat(self):
if self.is_partitioned:
return(None)
if not self.is_lowformatted:
self.diskFormat()
# This is a safeguard. We do *not* want to partition a disk that is mounted.
aif.utils.checkMounted(self.devpath)
if not self.partitions:
self.getPartitions()
if not self.partitions:
return(None)
for p in self.partitions:
p.format()
p.is_hiformatted = True
self.is_partitioned = True
return(None)


class Partition(object):
def __init__(self, part_xml, diskobj, start_sector, partnum, tbltype, part_type = None):
        # Believe it or not, dear reader, this *entire method* is just to set attributes.
if tbltype not in ('gpt', 'msdos'):
_logger.error('Invalid tabletype specified: {0}. Must be one of: gpt,msdos.'.format(tbltype))
raise ValueError('Invalid tbltype.')
if tbltype == 'msdos' and part_type not in ('primary', 'extended', 'logical'):
_logger.error(('Table type msdos requires the part_type to be specified and must be one of: primary,'
'extended,logical (instead of: {0}).').format(part_type))
raise ValueError('The part_type must be specified for msdos tables')
        _common.addBDPlugin('part')  # Use the direct import; aif.disk may not be fully initialized yet.
self.xml = part_xml
_logger.debug('part_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
_logger.debug('Partition number: {0}'.format(partnum))
self.id = self.xml.attrib['id']
self.table_type = getattr(_BlockDev.PartTableType, tbltype.upper())
_logger.debug('Partition table type: {0}.'.format(tbltype))
if tbltype == 'msdos':
# Could technically be _BlockDev.PartTypeReq.NEXT BUT that doesn't *quite* work
# with this project's structure.
if part_type == 'primary':
self.part_type = _BlockDev.PartTypeReq.NORMAL
elif part_type == 'extended':
self.part_type = _BlockDev.PartTypeReq.EXTENDED
elif part_type == 'logical':
self.part_type = _BlockDev.PartTypeReq.LOGICAL
elif tbltype == 'gpt':
self.part_type = _BlockDev.PartTypeReq.NORMAL
self.flags = []
self.partnum = partnum
self.fs_type = self.xml.attrib['fsType']
self.disk = diskobj
self.device = self.disk.path
self.devpath = '{0}{1}'.format(self.device, self.partnum)
        _logger.debug('Assigned to disk: {0} at path {1}'.format(self.device, self.devpath))
self.is_hiformatted = False
sizes = {}
for s in ('start', 'stop'):
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.xml.attrib[s])))
sectors = x['size']
if x['type'] == '%':
sectors = int(int(self.disk.size / self.disk.sector_size) * (0.01 * x['size']))
else:
sectors = int(aif.utils.size.convertStorage(x['size'],
x['type'],
target = 'B') / self.disk.sector_size)
sizes[s] = (sectors, x['from_bgn'])
if sizes['start'][1] is not None:
if sizes['start'][1]:
self.begin = sizes['start'][0] + 0
else:
self.begin = int(self.disk.size / self.disk.sector_size) - sizes['start'][0]
else:
self.begin = sizes['start'][0] + start_sector
if sizes['stop'][1] is not None:
if sizes['stop'][1]:
self.end = sizes['stop'][0] + 0
else:
# This *technically* should be - 34, at least for gpt, but the alignment optimizer fixes it for us.
self.end = (int(self.disk.size / self.disk.sector_size) - 1) - sizes['stop'][0]
else:
self.end = self.begin + sizes['stop'][0]
self.size = (self.end - self.begin)
_logger.debug('Size: {0} sectors (sector {1} to {2}).'.format(self.size, self.begin, self.end))
self.part_name = self.xml.attrib.get('name')
_logger.debug('Partition name: {0}'.format(self.part_name))
self.partition = None
self._initFlags()
self._initFstype()

def _initFlags(self):
for f in self.xml.findall('partitionFlag'):
# *Technically* we could use e.g. getattr(_BlockDev.PartFlag, f.text.upper()), *but* we lose compat
# with parted's flags if we do that. :| So we do some funky logic both here and in the constants.
if f.text in aif.constants.PARTED_BD_MAP:
flag_id = aif.constants.BD_PART_FLAGS_FLAG_IDX[aif.constants.PARTED_BD_MAP[f.text]]
elif f.text in aif.constants.BD_PART_FLAGS_FRIENDLY:
flag_id = aif.constants.BD_PART_FLAGS_FLAG_IDX[aif.constants.BD_PART_FLAGS_FRIENDLY[f.text]]
else:
continue
self.flags.append(_BlockDev.PartFlag(flag_id))
        _logger.debug('Partition flags: {0}'.format(','.join(str(f) for f in self.flags)))
return(None)

def _initFstype(self):
if self.fs_type in aif.constants.PARTED_FSTYPES_GUIDS.keys():
self.fs_type = aif.constants.PARTED_FSTYPES_GUIDS[self.fs_type]
_logger.debug('Filesystem type (parted): {0}'.format(self.fs_type))
else:
try:
self.fs_type = uuid.UUID(hex = self.fs_type)
_logger.debug('Filesystem type (explicit GUID): {0}'.format(str(self.fs_type)))
except ValueError:
_logger.error('Partition type GUID {0} is not a valid UUID4 string'.format(self.fs_type))
raise ValueError('Invalid partition type GUID')
if self.fs_type not in aif.constants.GPT_GUID_IDX.keys():
_logger.error('Partition type GUID {0} is not a valid partition type'.format(self.fs_type))
raise ValueError('Invalid partition type value')
return(None)

def format(self):
        _logger.info('Formatting partition {0}.'.format(self.id))
# This is a safeguard. We do *not* want to partition a disk that is mounted.
aif.utils.checkMounted(self.devpath)
_logger.info('Creating partition object.')
self.partition = _BlockDev.part.create_part(self.device,
self.part_type,
self.begin,
self.size,
_BlockDev.PartAlign.OPTIMAL)
_logger.debug('Partition object created.')
self.devpath = self.partition.path
_logger.debug('Partition path updated: {0}'.format(self.devpath))
_BlockDev.part.set_part_type(self.device, self.devpath, str(self.fs_type).upper())
if self.part_name:
_BlockDev.part.set_part_name(self.device, self.devpath, self.part_name)
if self.flags:
for f in self.flags:
_BlockDev.part.set_part_flag(self.device, self.devpath, f, True)
_logger.info('Partition {0} formatted.'.format(self.devpath))
return(None)

#
# def detect(self):
# pass # TODO; blkinfo?

aif/disk/block_fallback.py Normal file
@@ -0,0 +1,221 @@
# To reproduce sgdisk behaviour in v1 of AIF-NG:
# https://gist.github.com/herry13/5931cac426da99820de843477e41e89e
# https://github.com/dcantrell/pyparted/blob/master/examples/query_device_capacity.py
# TODO: Remember to replicate genfstab behaviour.

import logging
import os
try:
# https://stackoverflow.com/a/34812552/733214
# https://github.com/karelzak/util-linux/blob/master/libmount/python/test_mount_context.py#L6
import libmount as mount
except ImportError:
# We should never get here. util-linux is part of core (base) in Arch and uses "libmount".
import pylibmount as mount
##
import blkinfo
import parted # https://www.gnu.org/software/parted/api/index.html
from lxml import etree
##
import aif.constants
import aif.utils

# TODO: https://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment
# TODO: caveats? https://gist.github.com/leodutra/8779d468e9062058a3e90008295d3ca6
# https://unix.stackexchange.com/questions/325886/bios-gpt-do-we-need-a-boot-flag


_logger = logging.getLogger(__name__)


class Disk(object):
def __init__(self, disk_xml):
self.xml = disk_xml
_logger.debug('disk_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.devpath = os.path.realpath(self.xml.attrib['device'])
self.is_lowformatted = None
self.is_hiformatted = None
self.is_partitioned = None
self.partitions = None
self._initDisk()

def _initDisk(self):
if self.devpath == 'auto':
self.devpath = '/dev/{0}'.format(blkinfo.BlkDiskInfo().get_disks()[0]['kname'])
        if not os.path.exists(self.devpath):  # isfile() is False for block device nodes.
_logger.error('Disk {0} does not exist; please specify an explicit device path'.format(self.devpath))
raise ValueError('Disk not found')
self.table_type = self.xml.attrib.get('diskFormat', 'gpt').lower()
if self.table_type in ('bios', 'mbr', 'dos'):
self.table_type = 'msdos'
validlabels = parted.getLabels()
if self.table_type not in validlabels:
_logger.error(('Disk format ({0}) is not valid for this architecture;'
'must be one of: {1}'.format(self.table_type, ', '.join(list(validlabels)))))
raise ValueError('Invalid disk format')
self.device = parted.getDevice(self.devpath)
self.disk = parted.freshDisk(self.device, self.table_type)
_logger.debug('Configured parted device for {0}.'.format(self.devpath))
self.is_lowformatted = False
self.is_hiformatted = False
self.is_partitioned = False
self.partitions = []
return(None)

def diskFormat(self):
if self.is_lowformatted:
return(None)
# This is a safeguard. We do *not* want to low-format a disk that is mounted.
aif.utils.checkMounted(self.devpath)
self.disk.deleteAllPartitions()
self.disk.commit()
self.is_lowformatted = True
self.is_partitioned = False
return(None)

def getPartitions(self):
# For GPT, this *technically* should be 34 -- or, more precisely, 2048 (see FAQ in manual), but the alignment
# optimizer fixes it for us automatically.
# But for DOS tables, it's required.
_logger.info('Establishing partitions for {0}'.format(self.devpath))
if self.table_type == 'msdos':
start_sector = 2048
else:
start_sector = 0
self.partitions = []
xml_partitions = self.xml.findall('part')
for idx, part in enumerate(xml_partitions):
partnum = idx + 1
if self.table_type == 'gpt':
p = Partition(part, self.disk, start_sector, partnum, self.table_type)
else:
parttype = 'primary'
if len(xml_partitions) > 4:
if partnum == 4:
parttype = 'extended'
elif partnum > 4:
parttype = 'logical'
p = Partition(part, self.disk, start_sector, partnum, self.table_type, part_type = parttype)
start_sector = p.end + 1
self.partitions.append(p)
_logger.debug('Added partition {0}'.format(p.id))
return(None)

def partFormat(self):
if self.is_partitioned:
return(None)
if not self.is_lowformatted:
self.diskFormat()
# This is a safeguard. We do *not* want to partition a disk that is mounted.
aif.utils.checkMounted(self.devpath)
if not self.partitions:
self.getPartitions()
if not self.partitions:
return(None)
for p in self.partitions:
self.disk.addPartition(partition = p, constraint = self.device.optimalAlignedConstraint)
self.disk.commit()
p.devpath = p.partition.path
p.is_hiformatted = True
self.is_partitioned = True
return(None)


class Partition(object):
def __init__(self, part_xml, diskobj, start_sector, partnum, tbltype, part_type = None):
if tbltype not in ('gpt', 'msdos'):
_logger.error('Invalid tabletype specified: {0}. Must be one of: gpt,msdos.'.format(tbltype))
raise ValueError('Invalid tbltype.')
if tbltype == 'msdos' and part_type not in ('primary', 'extended', 'logical'):
_logger.error(('Table type msdos requires the part_type to be specified and must be one of: primary,'
'extended,logical (instead of: {0}).').format(part_type))
raise ValueError('The part_type must be specified for msdos tables')
self.xml = part_xml
_logger.debug('part_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
_logger.debug('Partition number: {0}'.format(partnum))
_logger.debug('Partition table type: {0}.'.format(tbltype))
self.id = self.xml.attrib['id']
self.flags = set()
for f in self.xml.findall('partitionFlag'):
if f.text in aif.constants.PARTED_FLAGS:
self.flags.add(f.text)
self.flags = sorted(list(self.flags))
self.partnum = partnum
if tbltype == 'msdos':
if partnum > 4:
self.part_type = parted.PARTITION_LOGICAL
else:
if part_type == 'extended':
self.part_type = parted.PARTITION_EXTENDED
elif part_type == 'logical':
self.part_type = parted.PARTITION_LOGICAL
else:
self.part_type = parted.PARTITION_NORMAL
self.fs_type = self.xml.attrib['fsType'].lower()
if self.fs_type not in aif.constants.PARTED_FSTYPES:
_logger.error(('{0} is not a valid partition filesystem type; must be one of: '
'{1}').format(self.xml.attrib['fsType'],
', '.join(sorted(aif.constants.PARTED_FSTYPES))))
raise ValueError('Invalid partition filesystem type')
self.disk = diskobj
self.device = self.disk.device
self.devpath = '{0}{1}'.format(self.device.path, self.partnum)
        _logger.debug('Assigned to disk: {0} at path {1}'.format(self.device, self.devpath))
self.is_hiformatted = False
sizes = {}
for s in ('start', 'stop'):
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.xml.attrib[s])))
sectors = x['size']
if x['type'] == '%':
sectors = int(self.device.getLength() * (0.01 * x['size']))
else:
sectors = int(aif.utils.size.convertStorage(x['size'],
x['type'],
target = 'B') / self.device.sectorSize)
sizes[s] = (sectors, x['from_bgn'])
if sizes['start'][1] is not None:
if sizes['start'][1]:
self.begin = sizes['start'][0] + 0
else:
self.begin = self.device.getLength() - sizes['start'][0]
else:
self.begin = sizes['start'][0] + start_sector
if sizes['stop'][1] is not None:
if sizes['stop'][1]:
self.end = sizes['stop'][0] + 0
else:
# This *technically* should be - 34, at least for gpt, but the alignment optimizer fixes it for us.
self.end = (self.device.getLength() - 1) - sizes['stop'][0]
else:
self.end = self.begin + sizes['stop'][0]
_logger.debug('Size: sector {0} to {1}.'.format(self.begin, self.end))
# TECHNICALLY we could craft the Geometry object with "length = ...", but it doesn't let us be explicit
# in configs. So we manually crunch the numbers and do it all at the end.
self.geometry = parted.Geometry(device = self.device,
start = self.begin,
end = self.end)
self.filesystem = parted.FileSystem(type = self.fs_type,
geometry = self.geometry)
self.partition = parted.Partition(disk = diskobj,
type = self.part_type,
geometry = self.geometry,
fs = self.filesystem)
for f in self.flags[:]:
flag_id = aif.constants.PARTED_FLAG_IDX[f]
if self.partition.isFlagAvailable(flag_id):
self.partition.setFlag(flag_id)
else:
self.flags.remove(f)
if tbltype == 'gpt' and self.xml.attrib.get('name'):
# The name attribute setting is b0rk3n, so we operate on the underlying PedPartition object.
# https://github.com/dcantrell/pyparted/issues/49#issuecomment-540096687
# https://github.com/dcantrell/pyparted/issues/65
# self.partition.name = self.xml.attrib.get('name')
_pedpart = self.partition.getPedPartition()
_pedpart.set_name(self.xml.attrib['name'])
_logger.debug('Partition name: {0}'.format(self.xml.attrib['name']))
#
# def detect(self):
# pass # TODO; blkinfo?

aif/disk/filesystem.py Normal file
@@ -0,0 +1,138 @@
import logging
import os
import subprocess
##
import psutil
from lxml import etree
##
import aif.disk.block as block
import aif.disk.luks as luks
import aif.disk.lvm as lvm
import aif.disk.mdadm as mdadm
import aif.utils
from . import _common


_BlockDev = _common.BlockDev
_logger = logging.getLogger(__name__)


FS_FSTYPES = aif.utils.kernelFilesystems()


class FS(object):
def __init__(self, fs_xml, sourceobj):
# http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Filesystem.html#gdbus-interface-org-freedesktop-UDisks2-Filesystem.top_of_page
# http://storaged.org/doc/udisks2-api/latest/ ?
self.xml = fs_xml
_logger.debug('fs_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
if not isinstance(sourceobj, (block.Disk,
block.Partition,
luks.LUKS,
lvm.LV,
mdadm.Array)):
            _logger.error(('sourceobj must be of type '
                           'aif.disk.block.Disk, '
                           'aif.disk.block.Partition, '
                           'aif.disk.luks.LUKS, '
                           'aif.disk.lvm.LV, or '
                           'aif.disk.mdadm.Array.'))
raise TypeError('Invalid sourceobj type')
self.source = sourceobj
self.devpath = sourceobj.devpath
self.formatted = False
self.fstype = self.xml.attrib.get('type')
if self.fstype not in FS_FSTYPES:
_logger.error('{0} is not a supported filesystem type on this system.'.format(self.fstype))
raise ValueError('Invalid filesystem type')

def format(self):
if self.formatted:
return(None)
# This is a safeguard. We do *not* want to high-format a disk that is mounted.
aif.utils.checkMounted(self.devpath)
# TODO: Can I format with DBus/gobject-introspection? I feel like I *should* be able to, but BlockDev's fs
# plugin is *way* too limited in terms of filesystems and UDisks doesn't let you format that high-level.
_logger.info('Formatting {0}.'.format(self.devpath))
cmd_str = ['mkfs',
'-t', self.fstype]
for o in self.xml.findall('opt'):
cmd_str.append(o.attrib['name'])
if o.text:
cmd_str.append(o.text)
cmd_str.append(self.devpath)
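        # e.g. (a sketch; the device path is hypothetical): type="ext4" with
        # <opt name="-L">root</opt> builds: mkfs -t ext4 -L root /dev/sda1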
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to format successfully')
else:
self.formatted = True
return(None)


class Mount(object):
# http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Filesystem.html#gdbus-method-org-freedesktop-UDisks2-Filesystem.Mount
def __init__(self, mount_xml, fsobj):
self.xml = mount_xml
_logger.debug('mount_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
if not isinstance(fsobj, FS):
            _logger.error('fsobj must be of type aif.disk.filesystem.FS.')
raise TypeError('Invalid fsobj type')
_common.addBDPlugin('fs') # We *could* use the UDisks dbus to mount too, but best to stay within libblockdev.
self.id = self.xml.attrib['id']
self.fs = fsobj
self.source = self.fs.devpath
self.target = os.path.realpath(self.xml.attrib['target'])
self.opts = {}
for o in self.xml.findall('opt'):
self.opts[o.attrib['name']] = o.text
self.mounted = False

def _parseOpts(self):
opts = []
for k, v in self.opts.items():
if v and v is not True: # Python's boolean determination is weird sometimes.
opts.append('{0}={1}'.format(k, v))
else:
opts.append(k)
_logger.debug('Rendered mount opts: {0}'.format(opts))
return(opts)
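        # e.g. (sketch): {'noatime': None, 'uid': '1000'} renders as ['noatime', 'uid=1000'].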

def mount(self):
if self.mounted:
return(None)
_logger.info('Mounting {0} at {1} as {2}.'.format(self.source, self.target, self.fs.fstype))
os.makedirs(self.target, exist_ok = True)
opts = self._parseOpts()
_BlockDev.fs.mount(self.source,
self.target,
self.fs.fstype,
(','.join(opts) if opts else None))
self.mounted = True
_logger.debug('{0} mounted.'.format(self.source))
return(None)

def unmount(self, lazy = False, force = False):
self.updateMount()
if not self.mounted and not force:
return(None)
_logger.info('Unmounting {0}.'.format(self.target))
_BlockDev.fs.unmount(self.target,
lazy,
force)
self.mounted = False
return(None)

def updateMount(self):
_logger.debug('Fetching mount status for {0}'.format(self.source))
if self.source in [p.device for p in psutil.disk_partitions(all = True)]:
self.mounted = True
else:
self.mounted = False
return(None)

aif/disk/filesystem_fallback.py Normal file
@@ -0,0 +1,158 @@
import logging
import os
import subprocess
##
import psutil
from lxml import etree
##
import aif.disk.block_fallback as block
import aif.disk.luks_fallback as luks
import aif.disk.lvm_fallback as lvm
import aif.disk.mdadm_fallback as mdadm
import aif.utils


_logger = logging.getLogger(__name__)


FS_FSTYPES = aif.utils.kernelFilesystems()


class FS(object):
def __init__(self, fs_xml, sourceobj):
self.xml = fs_xml
_logger.debug('fs_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
if not isinstance(sourceobj, (block.Disk,
block.Partition,
luks.LUKS,
lvm.LV,
mdadm.Array)):
            _logger.error(('sourceobj must be of type '
                           'aif.disk.block.Disk, '
                           'aif.disk.block.Partition, '
                           'aif.disk.luks.LUKS, '
                           'aif.disk.lvm.LV, or '
                           'aif.disk.mdadm.Array.'))
raise TypeError('Invalid sourceobj type')
self.id = self.xml.attrib['id']
self.source = sourceobj
self.devpath = sourceobj.devpath
self.formatted = False
self.fstype = self.xml.attrib.get('type')
if self.fstype not in FS_FSTYPES:
_logger.error('{0} is not a supported filesystem type on this system.'.format(self.fstype))
raise ValueError('Invalid filesystem type')

def format(self):
if self.formatted:
return(None)
# This is a safeguard. We do *not* want to high-format a disk that is mounted.
aif.utils.checkMounted(self.devpath)
_logger.info('Formatting {0}.'.format(self.devpath))
cmd_str = ['mkfs',
'-t', self.fstype]
for o in self.xml.findall('opt'):
cmd_str.append(o.attrib['name'])
if o.text:
cmd_str.append(o.text)
cmd_str.append(self.devpath)
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to format successfully')
else:
self.formatted = True
return(None)


class Mount(object):
def __init__(self, mount_xml, fsobj):
self.xml = mount_xml
_logger.debug('mount_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
        self.id = self.xml.attrib['id']
        if not isinstance(fsobj, FS):
            _logger.error('fsobj must be of type aif.disk.filesystem.FS.')
            raise TypeError('Invalid fsobj type')
self.fs = fsobj
self.source = self.fs.devpath
self.target = os.path.realpath(self.xml.attrib['target'])
self.opts = {}
for o in self.xml.findall('opt'):
self.opts[o.attrib['name']] = o.text
self.mounted = False

def _parseOpts(self):
opts = []
for k, v in self.opts.items():
if v and v is not True: # Python's boolean determination is weird sometimes.
opts.append('{0}={1}'.format(k, v))
else:
opts.append(k)
_logger.debug('Rendered mount opts: {0}'.format(opts))
return(opts)

def mount(self):
if self.mounted:
return(None)
_logger.info('Mounting {0} at {1} as {2}.'.format(self.source, self.target, self.fs.fstype))
os.makedirs(self.target, exist_ok = True)
opts = self._parseOpts()
cmd_str = ['/usr/bin/mount',
'--types', self.fs.fstype]
if opts:
cmd_str.extend(['--options', ','.join(opts)])
cmd_str.extend([self.source, self.target])
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to mount successfully')
else:
self.mounted = True
_logger.debug('{0} mounted.'.format(self.source))
return(None)

def unmount(self, lazy = False, force = False):
self.updateMount()
if not self.mounted and not force:
return(None)
_logger.info('Unmounting {0}.'.format(self.target))
cmd_str = ['/usr/bin/umount']
if lazy:
cmd_str.append('--lazy')
if force:
cmd_str.append('--force')
cmd_str.append(self.target)
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to unmount successfully')
else:
self.mounted = False
_logger.debug('{0} unmounted.'.format(self.source))
return(None)

def updateMount(self):
_logger.debug('Fetching mount status for {0}'.format(self.source))
if self.source in [p.device for p in psutil.disk_partitions(all = True)]:
self.mounted = True
else:
self.mounted = False
return(None)

aif/disk/luks.py Normal file
@@ -0,0 +1,245 @@
import logging
import os
import secrets
import uuid
##
from lxml import etree
##
from . import _common
import aif.disk.block as block
import aif.disk.lvm as lvm
import aif.disk.mdadm as mdadm


_logger = logging.getLogger(__name__)


_BlockDev = _common.BlockDev


class LuksSecret(object):
def __init__(self, *args, **kwargs):
_common.addBDPlugin('crypto')
self.passphrase = None
self.size = 4096
self.path = None
_logger.info('Instantiated {0}.'.format(type(self).__name__))


class LuksSecretPassphrase(LuksSecret):
def __init__(self, passphrase):
super().__init__()
self.passphrase = passphrase


class LuksSecretFile(LuksSecret):
# TODO: might do a little tweaking in a later release to support *reading from* bytes.
def __init__(self, path, passphrase = None, bytesize = 4096):
super().__init__()
self.path = os.path.abspath(os.path.expanduser(path))
_logger.debug('Path canonized: {0} => {1}'.format(path, self.path))
self.passphrase = passphrase
self.size = bytesize # only used if passphrase == None
self._genSecret()

def _genSecret(self):
if not self.passphrase:
# TODO: is secrets.token_bytes safe for *persistent* random data?
self.passphrase = secrets.token_bytes(self.size)
if not isinstance(self.passphrase, bytes):
self.passphrase = self.passphrase.encode('utf-8')
_logger.debug('Secret generated.')
return(None)


class LUKS(object):
def __init__(self, luks_xml, partobj):
self.xml = luks_xml
_logger.debug('luks_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.device = partobj
self.source = self.device.devpath
self.secrets = []
self.created = False
self.locked = True
if not isinstance(self.device, (block.Disk,
block.Partition,
lvm.LV,
mdadm.Array)):
_logger.error(('partobj must be of type '
'aif.disk.block.Disk, '
'aif.disk.block.Partition, '
                           'aif.disk.lvm.LV, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
_common.addBDPlugin('crypto')
self.devpath = '/dev/mapper/{0}'.format(self.name)
self.info = None

def addSecret(self, secretobj):
if not isinstance(secretobj, LuksSecret):
_logger.error('secretobj must be of type '
'aif.disk.luks.LuksSecret '
'(aif.disk.luks.LuksSecretPassphrase or '
'aif.disk.luks.LuksSecretFile).')
raise TypeError('Invalid secretobj type')
self.secrets.append(secretobj)
return(None)

def createSecret(self, secrets_xml = None):
_logger.info('Compiling secrets.')
if not secrets_xml: # Find all of them from self
_logger.debug('No secrets_xml specified; fetching from configuration block.')
for secret_xml in self.xml.findall('secrets'):
_logger.debug('secret_xml: {0}'.format(etree.tostring(secret_xml, with_tail = False).decode('utf-8')))
secretobj = None
secrettypes = set()
for s in secret_xml.iterchildren():
_logger.debug('secret_xml child: {0}'.format(etree.tostring(s, with_tail = False).decode('utf-8')))
secrettypes.add(s.tag)
if all((('passphrase' in secrettypes),
('keyFile' in secrettypes))):
                    # This is safe, because a valid config has at most one of each secret type.
kf = secret_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text, # path
passphrase = secret_xml.find('passphrase').text,
                                               bytesize = int(kf.attrib.get('size', 4096)))  # TECHNICALLY a no-op, but attribute values are strings.
elif 'passphrase' in secrettypes:
secretobj = LuksSecretPassphrase(secret_xml.find('passphrase').text)
elif 'keyFile' in secrettypes:
kf = secret_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text,
passphrase = None,
                                               bytesize = int(kf.attrib.get('size', 4096)))  # Attribute values are strings.
self.secrets.append(secretobj)
else:
_logger.debug('A secrets_xml was specified.')
secretobj = None
secrettypes = set()
for s in secrets_xml.iterchildren():
_logger.debug('secrets_xml child: {0}'.format(etree.tostring(s, with_tail = False).decode('utf-8')))
secrettypes.add(s.tag)
if all((('passphrase' in secrettypes),
('keyFile' in secrettypes))):
                # This is safe, because a valid config has at most one of each secret type.
kf = secrets_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text, # path
passphrase = secrets_xml.find('passphrase').text,
                                           bytesize = int(kf.attrib.get('size', 4096)))  # TECHNICALLY a no-op, but attribute values are strings.
elif 'passphrase' in secrettypes:
secretobj = LuksSecretPassphrase(secrets_xml.find('passphrase').text)
elif 'keyFile' in secrettypes:
kf = secrets_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text,
passphrase = None,
                                           bytesize = int(kf.attrib.get('size', 4096)))  # Attribute values are strings.
self.secrets.append(secretobj)
_logger.debug('Secrets compiled.')
return(None)

def create(self):
if self.created:
return(None)
_logger.info('Creating LUKS volume on {0}'.format(self.source))
if not self.secrets:
_logger.error('Cannot create a LUKS volume with no secrets added.')
raise RuntimeError('Cannot create a LUKS volume with no secrets')
for idx, secret in enumerate(self.secrets):
if idx == 0:
# TODO: add support for custom parameters for below?
_BlockDev.crypto.luks_format_luks2_blob(self.source,
None, # cipher (use default)
0, # keysize (use default)
secret.passphrase, # passphrase
0, # minimum entropy (use default)
_BlockDev.CryptoLUKSVersion.LUKS2, # LUKS version
None) # extra args
else:
_BlockDev.crypto.luks_add_key_blob(self.source,
self.secrets[0].passphrase,
secret.passphrase)
self.created = True
_logger.debug('Created LUKS volume.')
return(None)

def lock(self):
_logger.info('Locking: {0}'.format(self.source))
if not self.created:
_logger.error('Cannot lock a LUKS volume that does not exist yet.')
raise RuntimeError('Cannot lock non-existent volume')
if self.locked:
return(None)
_BlockDev.crypto.luks_close(self.name)
self.locked = True
_logger.debug('Locked.')
return(None)

def unlock(self, passphrase = None):
_logger.info('Unlocking: {0}'.format(self.source))
if not self.created:
_logger.error('Cannot unlock a LUKS volume that does not exist yet.')
raise RuntimeError('Cannot unlock non-existent volume')
if not self.locked:
return(None)
_BlockDev.crypto.luks_open_blob(self.source,
self.name,
self.secrets[0].passphrase,
False) # read-only
self.locked = False
_logger.debug('Unlocked.')
return(None)

def updateInfo(self):
_logger.info('Updating info.')
if self.locked:
_logger.error('Tried to fetch metadata about a locked volume. A volume must be unlocked first.')
raise RuntimeError('Must be unlocked to gather info')
info = {}
_info = _BlockDev.crypto.luks_info(self.devpath)
for k in dir(_info):
if k.startswith('_'):
continue
elif k in ('copy', ):
continue
v = getattr(_info, k)
if k == 'uuid':
v = uuid.UUID(hex = v)
info[k] = v
info['_cipher'] = '{cipher}-{mode}'.format(**info)
self.info = info
_logger.debug('Rendered updated info: {0}'.format(info))
return(None)

def writeConf(self, chroot_base, init_hook = True):
_logger.info('Generating crypttab.')
if not self.secrets:
_logger.error('Secrets must be added before the configuration can be written.')
raise RuntimeError('Missing secrets')
conf = os.path.join(chroot_base, 'etc', 'crypttab')
with open(conf, 'r') as fh:
conflines = fh.read().splitlines()
# Get UUID
disk_uuid = None
uuid_dir = '/dev/disk/by-uuid'
for u in os.listdir(uuid_dir):
d = os.path.join(uuid_dir, u)
if os.path.realpath(d) == self.source:
disk_uuid = u
if disk_uuid:
identifier = 'UUID={0}'.format(disk_uuid)
else:
# This is *not* ideal, but better than nothing.
identifier = self.source
primary_key = self.secrets[0]
luksinfo = '{0}\t{1}\t{2}\tluks'.format(self.name,
identifier,
(primary_key.path if primary_key.path else '-'))
if luksinfo not in conflines:
with open(conf, 'a') as fh:
fh.write('{0}\n'.format(luksinfo))
if init_hook:
os.symlink('/etc/crypttab', os.path.join(chroot_base, 'etc', 'crypttab.initramfs'))
_logger.debug('Symlinked initramfs crypttab.')
_logger.debug('Generated crypttab line: {0}'.format(luksinfo))
return(None)
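For reference, a minimal usage sketch of the class above (hedged: the element lookup, partition object, and chroot path are illustrative assumptions, not part of the module):

import aif.disk.luks as luks

# Assumed: `part` is an existing aif.disk.block.Partition and `luks_xml`
# is the <luks> element for it from an already-validated config.
vol = luks.LUKS(luks_xml, part)
vol.createSecret()         # compile secrets from the config block
vol.create()               # luksFormat, then add any additional keys
vol.unlock()               # maps the volume to /dev/mapper/<name>
vol.writeConf('/mnt/aif')  # append the crypttab line inside the chroot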

aif/disk/luks_fallback.py Normal file
@ -0,0 +1,331 @@
import logging
import os
import re
import secrets
import subprocess
import tempfile
import uuid
##
import parse
from lxml import etree
##
import aif.disk.block_fallback as block
import aif.disk.lvm_fallback as lvm
import aif.disk.mdadm_fallback as mdadm


_logger = logging.getLogger(__name__)


class LuksSecret(object):
def __init__(self, *args, **kwargs):
self.passphrase = None
self.size = 4096
self.path = None
_logger.info('Instantiated {0}.'.format(type(self).__name__))


class LuksSecretPassphrase(LuksSecret):
def __init__(self, passphrase):
super().__init__()
self.passphrase = passphrase


class LuksSecretFile(LuksSecret):
# TODO: might do a little tweaking in a later release to support *reading from* bytes.
def __init__(self, path, passphrase = None, bytesize = 4096):
super().__init__()
self.path = os.path.realpath(path)
_logger.debug('Path canonized: {0} => {1}'.format(path, self.path))
self.passphrase = passphrase
self.size = bytesize # only used if passphrase == None
self._genSecret()

def _genSecret(self):
if not self.passphrase:
# TODO: is secrets.token_bytes safe for *persistent* random data?
self.passphrase = secrets.token_bytes(self.size)
if not isinstance(self.passphrase, bytes):
self.passphrase = self.passphrase.encode('utf-8')
_logger.debug('Secret generated.')
return(None)


class LUKS(object):
def __init__(self, luks_xml, partobj):
self.xml = luks_xml
_logger.debug('luks_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.device = partobj
self.source = self.device.devpath
self.secrets = []
self.created = False
self.locked = True
if not isinstance(self.device, (block.Disk,
block.Partition,
lvm.LV,
mdadm.Array)):
_logger.error(('partobj must be of type '
'aif.disk.block.Disk, '
'aif.disk.block.Partition, '
'aif.disk.lvm.LV, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
self.devpath = '/dev/mapper/{0}'.format(self.name)
self.info = None

def addSecret(self, secretobj):
if not isinstance(secretobj, LuksSecret):
_logger.error('secretobj must be of type '
'aif.disk.luks.LuksSecret '
'(aif.disk.luks.LuksSecretPassphrase or '
'aif.disk.luks.LuksSecretFile).')
raise TypeError('Invalid secretobj type')
self.secrets.append(secretobj)
return(None)

def createSecret(self, secrets_xml = None):
_logger.info('Compiling secrets.')
if not secrets_xml: # Find all of them from self
_logger.debug('No secrets_xml specified; fetching from configuration block.')
for secret_xml in self.xml.findall('secrets'):
_logger.debug('secret_xml: {0}'.format(etree.tostring(secret_xml, with_tail = False).decode('utf-8')))
secretobj = None
secrettypes = set()
for s in secret_xml.iterchildren():
_logger.debug('secret_xml child: {0}'.format(etree.tostring(s, with_tail = False).decode('utf-8')))
secrettypes.add(s.tag)
if all((('passphrase' in secrettypes),
('keyFile' in secrettypes))):
# This is safe, because a valid config only has at most one of both types.
kf = secret_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text, # path
passphrase = secret_xml.find('passphrase').text,
bytesize = int(kf.attrib.get('size', 4096))) # TECHNICALLY should be a no-op. (attrib values are strings.)
elif 'passphrase' in secrettypes:
secretobj = LuksSecretPassphrase(secret_xml.find('passphrase').text)
elif 'keyFile' in secrettypes:
kf = secret_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text,
passphrase = None,
bytesize = int(kf.attrib.get('size', 4096)))
self.secrets.append(secretobj)
else:
_logger.debug('A secrets_xml was specified.')
secretobj = None
secrettypes = set()
for s in secrets_xml.iterchildren():
_logger.debug('secrets_xml child: {0}'.format(etree.tostring(s, with_tail = False).decode('utf-8')))
secrettypes.add(s.tag)
if all((('passphrase' in secrettypes),
('keyFile' in secrettypes))):
# This is safe, because a valid config only has at most one of both types.
kf = secrets_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text, # path
passphrase = secrets_xml.find('passphrase').text,
bytesize = int(kf.attrib.get('size', 4096))) # TECHNICALLY should be a no-op. (attrib values are strings.)
elif 'passphrase' in secrettypes:
secretobj = LuksSecretPassphrase(secrets_xml.find('passphrase').text)
elif 'keyFile' in secrettypes:
kf = secrets_xml.find('keyFile')
secretobj = LuksSecretFile(kf.text,
passphrase = None,
bytesize = int(kf.attrib.get('size', 4096)))
self.secrets.append(secretobj)
_logger.debug('Secrets compiled.')
return(None)

def create(self):
if self.created:
return(None)
_logger.info('Creating LUKS volume on {0}'.format(self.source))
if not self.secrets:
_logger.error('Cannot create a LUKS volume with no secrets added.')
raise RuntimeError('Cannot create a LUKS volume with no secrets')
for idx, secret in enumerate(self.secrets):
if idx == 0:
# TODO: add support for custom parameters for below?
cmd_str = ['cryptsetup',
'--batch-mode',
'luksFormat',
'--type', 'luks2',
'--key-file', '-',
self.source]
cmd = subprocess.run(cmd_str,
input = secret.passphrase,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to encrypt successfully')
else:
# TODO: does the key-file need to be the same path in the installed system?
tmpfile = tempfile.mkstemp()
with open(tmpfile[1], 'wb') as fh:
fh.write(secret.passphrase)
cmd_str = ['cryptsetup',
'--batch-mode',
'luksAddKey',
'--key-file', '-',
self.source,
tmpfile[1]]
cmd = subprocess.run(cmd_str,
input = self.secrets[0].passphrase,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)

os.remove(tmpfile[1])
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to encrypt successfully')
self.created = True
return(None)

def lock(self):
if not self.created:
raise RuntimeError('Cannot lock a LUKS volume before it is created')
if self.locked:
return(None)
cmd_str = ['cryptsetup',
'--batch-mode',
'luksClose',
self.name]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to lock successfully')
self.locked = True
return(None)

def unlock(self, passphrase = None):
if not self.created:
raise RuntimeError('Cannot unlock a LUKS volume before it is created')
if not self.locked:
return(None)
cmd_str = ['cryptsetup',
'--batch-mode',
'luksOpen',
'--key-file', '-',
self.source,
self.name]
cmd = subprocess.run(cmd_str,
input = self.secrets[0].passphrase,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to unlock successfully')
self.locked = False
return(None)

def updateInfo(self):
if self.locked:
raise RuntimeError('Must be unlocked to gather info')
info = {}
cmd_str = ['cryptsetup',
'--batch-mode',
'luksDump',
self.source]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to fetch info successfully')
_info = cmd.stdout.decode('utf-8')
k = None
# I wish there was a better way to do this but I sure as heck am not writing a regex to do it.
# https://gitlab.com/cryptsetup/cryptsetup/issues/511
# https://pypi.org/project/parse/
_tpl = ('LUKS header information\nVersion: {header_ver}\nEpoch: {epoch_ver}\n'
'Metadata area: {metadata_pos} [bytes]\nKeyslots area: {keyslots_pos} [bytes]\n'
'UUID: {uuid}\nLabel: {label}\nSubsystem: {subsystem}\n'
'Flags: {flags}\n\nData segments:\n 0: crypt\n '
'offset: {offset_bytes} [bytes]\n length: {crypt_length}\n '
'cipher: {crypt_cipher}\n sector: {sector_size} [bytes]\n\nKeyslots:\n 0: luks2\n '
'Key: {key_size} bits\n Priority: {priority}\n '
'Cipher: {keyslot_cipher}\n Cipher key: {cipher_key_size} bits\n '
'PBKDF: {pbkdf}\n Time cost: {time_cost}\n Memory: {memory}\n '
'Threads: {threads}\n Salt: {key_salt} \n AF stripes: {af_stripes}\n '
'AF hash: {af_hash}\n Area offset:{keyslot_offset} [bytes]\n '
'Area length:{keyslot_length} [bytes]\n Digest ID: {keyslot_id}\nTokens:\nDigests:\n '
'0: pbkdf2\n Hash: {token_hash}\n Iterations: {token_iterations}\n '
'Salt: {token_salt}\n Digest: {token_digest}\n\n')
info = parse.parse(_tpl, _info).named
for k, v in info.items():
# Technically we can do this in the _tpl string, but it's hard to visually parse.
if k in ('af_stripes', 'cipher_key_size', 'epoch_ver', 'header_ver', 'key_size', 'keyslot_id',
'keyslot_length', 'keyslot_offset', 'keyslots_pos', 'memory', 'metadata_pos', 'offset_bytes',
'sector_size', 'threads', 'time_cost', 'token_iterations'):
v = int(v)
elif k in ('key_salt', 'token_digest', 'token_salt'):
v = bytes.fromhex(re.sub(r'\s+', '', v))
elif k in ('label', 'subsystem'):
if re.search(r'\(no\s+', v.lower()):
v = None
elif k == 'flags':
if v.lower() == '(no flags)':
v = []
else:
# Is this space-separated or comma-separated? TODO.
v = [i.strip() for i in v.split() if i.strip() != '']
elif k == 'uuid':
v = uuid.UUID(hex = v)
self.info = info
_logger.debug('Rendered updated info: {0}'.format(self.info))
return(None)

def writeConf(self, chroot_base, init_hook = True):
_logger.info('Generating crypttab.')
if not self.secrets:
_logger.error('Secrets must be added before the configuration can be written.')
raise RuntimeError('Missing secrets')
conf = os.path.join(chroot_base, 'etc', 'crypttab')
with open(conf, 'r') as fh:
conflines = fh.read().splitlines()
# Get UUID
disk_uuid = None
uuid_dir = '/dev/disk/by-uuid'
for u in os.listdir(uuid_dir):
d = os.path.join(uuid_dir, u)
if os.path.realpath(d) == self.source:
disk_uuid = u
if disk_uuid:
identifier = 'UUID={0}'.format(disk_uuid)
else:
# This is *not* ideal, but better than nothing.
identifier = self.source
primary_key = self.secrets[0]
luksinfo = '{0}\t{1}\t{2}\tluks'.format(self.name,
identifier,
(primary_key.path if primary_key.path else '-'))
if luksinfo not in conflines:
with open(conf, 'a') as fh:
fh.write('{0}\n'.format(luksinfo))
if init_hook:
os.symlink('/etc/crypttab', os.path.join(chroot_base, 'etc', 'crypttab.initramfs'))
_logger.debug('Symlinked initramfs crypttab.')
_logger.debug('Generated crypttab line: {0}'.format(luksinfo))
return(None)
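Since the fallback leans on the third-party parse library instead of a regex for luksDump output, a toy example of the technique may help (the template and text below are illustrative and far shorter than the real _tpl):

import parse

tpl = 'Version: {header_ver}\nUUID: {uuid}\n'
text = 'Version: 2\nUUID: 12345678-1234-1234-1234-123456789abc\n'
fields = parse.parse(tpl, text).named
# fields == {'header_ver': '2', 'uuid': '12345678-1234-1234-1234-123456789abc'}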

aif/disk/lvm.py Normal file
@ -0,0 +1,340 @@
import logging
# import uuid
##
from lxml import etree
##
from . import _common
import aif.utils
import aif.disk.block as block
import aif.disk.luks as luks
import aif.disk.mdadm as mdadm


_logger = logging.getLogger(__name__)


_BlockDev = _common.BlockDev


class LV(object):
def __init__(self, lv_xml, vgobj):
self.xml = lv_xml
_logger.debug('lv_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.vg = vgobj
self.qualified_name = '{0}/{1}'.format(self.vg.name, self.name)
_logger.debug('Qualified name: {0}'.format(self.qualified_name))
self.pvs = []
if not isinstance(self.vg, VG):
_logger.debug('vgobj must be of type aif.disk.lvm.VG')
raise TypeError('Invalid vgobj type')
_common.addBDPlugin('lvm')
self.info = None
self.devpath = '/dev/{0}/{1}'.format(self.vg.name, self.name)
self.created = False
self.updateInfo()
self._initLV()

def _initLV(self):
self.pvs = []
_indexed_pvs = {i.id: i for i in self.vg.pvs}
for pe in self.xml.findall('pvMember'):
_logger.debug('Found PV element: {0}'.format(etree.tostring(pe, with_tail = False).decode('utf-8')))
pv_id = pe.attrib['source']
if pv_id in _indexed_pvs.keys():
self.pvs.append(_indexed_pvs[pv_id])
if not self.pvs: # We get all in the VG instead since none were explicitly assigned
_logger.debug('No PVs explicitly designated to VG; adding all.')
self.pvs = self.vg.pvs
# Size processing. We have to do this after indexing PVs.
# If not x['type'], assume *extents*, not sectors
self.size = self.xml.attrib['size'] # Convert to bytes. Can get max from _BlockDev.lvm.vginfo(<VG>).free TODO
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.xml.attrib['size'])))
# self.size is bytes
self.size = x['size']
_extents = {'size': self.vg.info['extent_size'],
'total': 0} # We can't use self.vg.info['extent_count'] because selective PVs.
_sizes = {'total': 0,
'free': 0}
_vg_pe = self.vg.info['extent_size']
for pv in self.pvs:
_sizes['total'] += pv.info['pv_size']
_sizes['free'] += pv.info['pv_free']
_extents['total'] += int(pv.info['pv_size'] / _extents['size'])
if x['type'] == '%':
self.size = int(_sizes['total'] * (0.01 * self.size))
elif x['type'] is None:
self.size = int(self.size * _extents['size'])
else:
self.size = int(aif.utils.size.convertStorage(x['size'],
x['type'],
target = 'B'))
if self.size >= _sizes['total']:
self.size = 0
return(None)

def create(self):
if not self.pvs:
_logger.error('Cannot create LV with no associated PVs')
raise RuntimeError('Missing PVs')
opts = [_BlockDev.ExtraArg.new('--reportformat', 'json')]
# FUCK. LVM. You can't *specify* a UUID.
# u = uuid.uuid4()
# opts.append(_BlockDev.ExtraArg.new('--uuid', str(u)))
# for t in self.tags:
# opts.append(_BlockDev.ExtraArg.new('--addtag', t))
_BlockDev.lvm.lvcreate(self.vg.name,
self.name,
self.size,
None,
[i.devpath for i in self.pvs],
opts)
self.vg.lvs.append(self)
self.created = True
self.updateInfo()
self.vg.updateInfo()
return(None)

def start(self):
_logger.info('Activating LV {0} in VG {1}.'.format(self.name, self.vg.name))
_BlockDev.lvm.lvactivate(self.vg.name,
self.name,
True,
None)
self.updateInfo()
return(None)

def stop(self):
_logger.info('Deactivating LV {0} in VG {1}.'.format(self.name, self.vg.name))
_BlockDev.lvm.lvdeactivate(self.vg.name,
self.name,
None)
self.updateInfo()
return(None)

def updateInfo(self):
if not self.created:
_logger.warning('Attempted to updateInfo on an LV not created yet.')
return(None)
_info = _BlockDev.lvm.lvinfo(self.vg.name, self.name)
# TODO: parity with lvm_fallback.LV.updateInfo
# key names currently (probably) don't match and need to confirm the information's all present
info = {}
for k in dir(_info):
if k.startswith('_'):
continue
elif k in ('copy',):
continue
v = getattr(_info, k)
info[k] = v
self.info = info
_logger.debug('Rendered info: {0}'.format(info))
return(None)


class PV(object):
def __init__(self, pv_xml, partobj):
self.xml = pv_xml
_logger.debug('pv_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.source = self.xml.attrib['source']
self.device = partobj
if not isinstance(self.device, (block.Disk,
block.Partition,
luks.LUKS,
mdadm.Array)):
_logger.error(('partobj must be of type '
'aif.disk.block.Disk, '
'aif.disk.block.Partition, '
'aif.disk.luks.LUKS, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
_common.addBDPlugin('lvm')
self.devpath = self.device.devpath
self.is_pooled = False
self.meta = None
self._parseMeta()

def _parseMeta(self):
# Note, the "UUID" for LVM is *not* a true UUID (RFC4122) so we don't convert it.
# https://unix.stackexchange.com/questions/173722/what-is-the-uuid-format-used-by-lvm
# TODO: parity with lvm_fallback.PV._parseMeta
# key names currently (probably) don't match and need to confirm the information's all present
meta = {}
try:
_meta = _BlockDev.lvm.pvinfo(self.devpath)
except _BlockDev.LVMError:
_logger.debug('PV device is not a PV yet.')
self.meta = None
self.is_pooled = False
return(None)
for k in dir(_meta):
if k.startswith('_'):
continue
elif k in ('copy', ):
continue
v = getattr(_meta, k)
meta[k] = v
self.meta = meta
_logger.debug('Rendered meta: {0}'.format(meta))
self.is_pooled = True
return(None)

def prepare(self):
try:
if not self.meta:
self._parseMeta()
if self.meta:
vg = self.meta['vg_name']
# LVM is SO. DUMB.
# If you're using LVM, seriously - just switch your model to mdadm. It lets you do things like
# remove disks live without restructuring the entire thing.
# That said, because the config references partitions/disks/arrays/etc. created *in the same config*,
# and it's all dependent on block devices defined in the thing, we can be reckless here.
# I'd like to take the time now to remind you to NOT RUN AIF-NG ON A "PRODUCTION"-STATE MACHINE.
# At least until I can maybe find a better way to determine which LVs to reduce on multi-LV VGs
# so I can *then* use lvresize in a balanced manner, vgreduce, and pvmove/pvremove and not kill
# everything.
# TODO.
for lv in _BlockDev.lvm.lvs():
if lv.vg_name == vg:
_logger.info('Removing LV {0} from VG {1}.'.format(lv.lv_name, vg))
_BlockDev.lvm.lvremove(vg, lv.lv_name)
_logger.debug('Reducing VG {0}.'.format(vg))
_BlockDev.lvm.vgreduce(vg)
_logger.info('Removing VG {0}.'.format(vg))
_BlockDev.lvm.vgremove(vg) # This *shouldn't* fail. In theory. But LVM is lel.
_logger.info('Removing PV {0}.'.format(self.devpath))
_BlockDev.lvm.pvremove(self.devpath)
# Or if I can get this working properly. Shame it isn't automagic.
# Seems to kill the LV by dropping a PV under it. Makes sense, but STILL. LVM IS SO DUMB.
# _BlockDev.lvm.vgdeactivate(vg)
# _BlockDev.lvm.pvremove(self.devpath)
# _BlockDev.lvm.vgreduce(vg)
# _BlockDev.lvm.vgactivate(vg)
##
self.meta = None
self.is_pooled = False
except _BlockDev.LVMError:
self.meta = None
self.is_pooled = False
opts = [_BlockDev.ExtraArg.new('--reportformat', 'json')]
# FUCK. LVM. You can't *specify* a UUID.
# u = uuid.uuid4()
# opts.append(_BlockDev.ExtraArg.new('--uuid', str(u)))
_BlockDev.lvm.pvcreate(self.devpath,
0,
0,
opts)
_logger.info('Created PV {0} with opts {1}'.format(self.devpath, opts))
self._parseMeta()
return(None)


class VG(object):
def __init__(self, vg_xml):
self.xml = vg_xml
_logger.debug('vg_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.pe_size = self.xml.attrib.get('extentSize', 0)
if self.pe_size:
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.pe_size)))
if x['type']:
self.pe_size = aif.utils.size.convertStorage(self.pe_size,
x['type'],
target = 'B')
if not aif.utils.isPowerofTwo(self.pe_size):
_logger.error('The PE size must be a power of two (in bytes).')
raise ValueError('Invalid PE value')
self.lvs = []
self.pvs = []
# self.tags = []
# for te in self.xml.findall('tags/tag'):
# self.tags.append(te.text)
_common.addBDPlugin('lvm')
self.devpath = '/dev/{0}'.format(self.name)
self.info = None
self.created = False

def addPV(self, pvobj):
if not isinstance(pvobj, PV):
_logger.error('pvobj must be of type aif.disk.lvm.PV.')
raise TypeError('Invalid pvobj type')
pvobj.prepare()
self.pvs.append(pvobj)
return(None)

def create(self):
if not self.pvs:
_logger.error('Cannot create a VG with no PVs.')
raise RuntimeError('Missing PVs')
opts = [_BlockDev.ExtraArg.new('--reportformat', 'json')]
# FUCK. LVM. You can't *specify* a UUID.
# u = uuid.uuid4()
# opts.append(_BlockDev.ExtraArg.new('--uuid', str(u)))
# for t in self.tags:
# opts.append(_BlockDev.ExtraArg.new('--addtag', t))
_BlockDev.lvm.vgcreate(self.name,
[p.devpath for p in self.pvs],
self.pe_size,
opts)
for pv in self.pvs:
pv._parseMeta()
self.created = True
self.updateInfo()
return(None)

def createLV(self, lv_xml = None):
if not self.created:
_logger.error('Attempted to add an LV to a VG before it was created.')
raise RuntimeError('LV before VG creation')
# If lv_xml is None, we loop through our own XML.
if lv_xml:
_logger.debug('Explicit lv_xml specified: {0}'.format(etree.tostring(lv_xml,
with_tail = False).decode('utf-8')))
lv = LV(lv_xml, self)
lv.create()
# self.lvs.append(lv)
else:
for le in self.xml.findall('logicalVolumes/lv'):
_logger.debug('Found lv element: {0}'.format(etree.tostring(le, with_tail = False).decode('utf-8')))
lv = LV(le, self)
lv.create()
# self.lvs.append(lv)
self.updateInfo()
return(None)

def start(self):
_logger.info('Activating VG: {0}.'.format(self.name))
_BlockDev.lvm.vgactivate(self.name)
self.updateInfo()
return(None)

def stop(self):
_logger.info('Deactivating VG: {0}.'.format(self.name))
_BlockDev.lvm.vgdeactivate(self.name)
self.updateInfo()
return(None)

def updateInfo(self):
if not self.created:
_logger.warning('Attempted to updateInfo on a VG not created yet.')
return(None)
_info = _BlockDev.lvm.vginfo(self.name)
# TODO: parity with lvm_fallback.VG.updateInfo
# key names currently (probably) don't match and need to confirm the information's all present
info = {}
for k in dir(_info):
if k.startswith('_'):
continue
elif k in ('copy',):
continue
v = getattr(_info, k)
info[k] = v
self.info = info
_logger.debug('Rendered info: {0}'.format(info))
return(None)
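A rough sketch of how PV, VG, and LV compose (hedged: the XML elements and partobj are assumed to come from an already-parsed, valid config):

import aif.disk.lvm as lvm

vg = lvm.VG(vg_xml)            # assumed: a <vg> element with id/name/extentSize
pv = lvm.PV(pv_xml, partobj)   # partobj: a Disk/Partition/LUKS/Array from this config
vg.addPV(pv)                   # wipes any old PV signature, then creates the PV
vg.create()                    # vgcreate across every added PV
vg.createLV()                  # create each <lv> under logicalVolumes
vg.start()                     # activate the VG (and its LVs)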

aif/disk/lvm_fallback.py Normal file
@ -0,0 +1,461 @@
import datetime
import json
import logging
import subprocess
##
from lxml import etree
##
import aif.utils
import aif.disk.block_fallback as block
import aif.disk.luks_fallback as luks
import aif.disk.mdadm_fallback as mdadm


_logger = logging.getLogger(__name__)


class LV(object):
def __init__(self, lv_xml, vgobj):
self.xml = lv_xml
_logger.debug('lv_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.vg = vgobj
self.qualified_name = '{0}/{1}'.format(self.vg.name, self.name)
_logger.debug('Qualified name: {0}'.format(self.qualified_name))
self.pvs = []
if not isinstance(self.vg, VG):
_logger.debug('vgobj must be of type aif.disk.lvm.VG')
raise TypeError('Invalid vgobj type')
self.info = None
self.devpath = '/dev/{0}/{1}'.format(self.vg.name, self.name)
self.created = False
self.updateInfo()
self._initLV()

def _initLV(self):
self.pvs = []
_indexed_pvs = {i.id: i for i in self.vg.pvs}
for pe in self.xml.findall('pvMember'):
_logger.debug('Found PV element: {0}'.format(etree.tostring(pe, with_tail = False).decode('utf-8')))
pv_id = pe.attrib['source']
if pv_id in _indexed_pvs.keys():
self.pvs.append(_indexed_pvs[pv_id])
if not self.pvs: # We get all in the VG instead since none were explicitly assigned
_logger.debug('No PVs explicitly designated to VG; adding all.')
self.pvs = self.vg.pvs
# Size processing. We have to do this after indexing PVs.
# If not x['type'], assume *extents*, not sectors
self.size = self.xml.attrib['size'] # Convert to bytes. Can get max from _BlockDev.lvm.vginfo(<VG>).free TODO
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.xml.attrib['size'])))
# self.size is bytes
self.size = x['size']
_extents = {'size': self.vg.info['extent_size'],
'total': 0} # We can't use self.vg.info['extent_count'] because selective PVs.
_sizes = {'total': 0,
'free': 0}
_vg_pe = self.vg.info['extent_size']
for pv in self.pvs:
_sizes['total'] += pv.info['pv_size']
_sizes['free'] += pv.info['pv_free']
_extents['total'] += int(pv.info['pv_size'] / _extents['size'])
if x['type'] == '%':
self.size = int(_sizes['total'] * (0.01 * self.size))
elif x['type'] is None:
self.size = int(self.size * _extents['size'])
else:
self.size = int(aif.utils.size.convertStorage(x['size'],
x['type'],
target = 'B'))
if self.size >= _sizes['total']:
self.size = 0
return(None)

def create(self):
if not self.pvs:
_logger.error('Cannot create LV with no associated PVs')
raise RuntimeError('Missing PVs')
cmd_str = ['lvcreate',
'--reportformat', 'json']
if self.size > 0:
cmd_str.extend(['--size', '{0}b'.format(self.size)]) # lvcreate needs a unit suffix; self.size is bytes.
elif self.size == 0:
cmd_str.extend(['--extents', '100%FREE'])
cmd_str.extend(['--name', self.name, self.vg.name]) # The LV name goes via --name; the positional arg is the VG.
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to create LV successfully')
self.vg.lvs.append(self)
self.created = True
self.updateInfo()
self.vg.updateInfo()
return(None)

def start(self):
_logger.info('Activating LV {0} in VG {1}.'.format(self.name, self.vg.name))
cmd_str = ['lvchange',
'--activate', 'y',
'--reportformat', 'json',
self.qualified_name]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to activate LV successfully')
self.updateInfo()
return(None)

def stop(self):
_logger.info('Deactivating LV {0} in VG {1}.'.format(self.name, self.vg.name))
cmd_str = ['lvchange',
'--activate', 'n',
'--reportformat', 'json',
self.qualified_name]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to deactivate successfully')
self.updateInfo()
return(None)

def updateInfo(self):
if not self.created:
_logger.warning('Attempted to updateInfo on an LV not created yet.')
return(None)
info = {}
cmd = ['lvs',
'--binary',
'--nosuffix',
'--units', 'b',
'--options', '+lvall',
'--reportformat', 'json',
self.qualified_name]
_info = subprocess.run(cmd, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(_info.args)))
if _info.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(_info.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(_info, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
self.info = None
self.created = False
return(None)
_info = json.loads(_info.stdout.decode('utf-8'))['report'][0]['lv'][0]
for k, v in _info.items():
# ints
if k in ('lv_fixed_minor', 'lv_kernel_major', 'lv_kernel_minor', 'lv_kernel_read_ahead', 'lv_major',
'lv_metadata_size', 'lv_minor', 'lv_size', 'seg_count'):
try:
v = int(v)
except ValueError:
v = 0
# booleans - LVs apparently have a third value, "-1", which is "unknown". We translate to None.
elif k in ('lv_active_exclusively', 'lv_active_locally', 'lv_active_remotely', 'lv_allocation_locked',
'lv_check_needed', 'lv_converting', 'lv_device_open', 'lv_historical', 'lv_image_synced',
'lv_inactive_table', 'lv_initial_image_sync', 'lv_live_table', 'lv_merge_failed', 'lv_merging',
'lv_skip_activation', 'lv_snapshot_invalid', 'lv_suspended'):
if v == '-1':
v = None
else:
v = (True if int(v) == 1 else False)
# lists
elif k in ('lv_ancestors', 'lv_descendants', 'lv_full_ancestors', 'lv_full_descendants', 'lv_lockargs',
'lv_modules', 'lv_permissions', 'lv_tags'):
v = [i.strip() for i in v.split(',') if i.strip() != '']
# date time strings
elif k in ('lv_time', ):
v = datetime.datetime.strptime(v, '%Y-%m-%d %H:%M:%S %z')
elif v.strip() == '':
v = None
info[k] = v
self.info = info
_logger.debug('Rendered info: {0}'.format(info))
return(None)


class PV(object):
def __init__(self, pv_xml, partobj):
self.xml = pv_xml
_logger.debug('pv_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.source = self.xml.attrib['source']
self.device = partobj
if not isinstance(self.device, (block.Disk,
block.Partition,
luks.LUKS,
mdadm.Array)):
_logger.error(('partobj must be of type '
'aif.disk.block.Disk, '
'aif.disk.block.Partition, '
'aif.disk.luks.LUKS, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
self.devpath = self.device.devpath
self.is_pooled = False
self.meta = None
self._parseMeta()

def _parseMeta(self):
# Note, the "UUID" for LVM is *not* a true UUID (RFC4122) so we don't convert it.
# https://unix.stackexchange.com/questions/173722/what-is-the-uuid-format-used-by-lvm
meta = {}
cmd = ['pvs',
'--binary',
'--nosuffix',
'--units', 'b',
'--options', '+pvall',
'--reportformat', 'json',
self.devpath]
_meta = subprocess.run(cmd, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(_meta.args)))
if _meta.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(_meta.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(_meta, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
self.meta = None
self.is_pooled = False
return(None)
_meta = json.loads(_meta.stdout.decode('utf-8'))['report'][0]['pv'][0]
for k, v in _meta.items():
# We *could* regex this but the pattern would be a little more complex than ideal,
# especially for such predictable strings.
# These are ints.
if k in ('dev_size', 'pe_start', 'pv_ba_size', 'pv_ba_start', 'pv_ext_vsn', 'pv_free', 'pv_major',
'pv_mda_count', 'pv_mda_free', 'pv_mda_size', 'pv_mda_used_count', 'pv_minor', 'pv_pe_alloc_count',
'pv_pe_alloc_count', 'pv_size', 'pv_used'):
v = int(v)
# These are boolean.
elif k in ('pv_allocatable', 'pv_duplicate', 'pv_exported', 'pv_in_use', 'pv_missing'):
v = (True if int(v) == 1 else False)
# This is a list.
elif k == 'pv_tags':
v = [i.strip() for i in v.split(',') if i.strip() != '']
elif v.strip() == '':
v = None
meta[k] = v
self.meta = meta
self.is_pooled = True
_logger.debug('Rendered meta: {0}'.format(meta))
return(None)

def prepare(self):
if not self.meta:
self._parseMeta()
# *Technically*, we should vgreduce before pvremove, but eff it.
cmd_str = ['pvremove',
'--force', '--force',
'--reportformat', 'json',
self.devpath]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to remove PV successfully')
cmd_str = ['pvcreate',
'--reportformat', 'json',
self.devpath]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to format successfully')
self._parseMeta()
return(None)


class VG(object):
def __init__(self, vg_xml):
self.xml = vg_xml
_logger.debug('vg_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.name = self.xml.attrib['name']
self.pe_size = self.xml.attrib.get('extentSize', 0)
if self.pe_size:
x = dict(zip(('from_bgn', 'size', 'type'),
aif.utils.convertSizeUnit(self.pe_size)))
if x['type']:
self.pe_size = aif.utils.size.convertStorage(self.pe_size,
x['type'],
target = 'B')
if not aif.utils.isPowerofTwo(self.pe_size):
_logger.error('The PE size must be a power of two (in bytes).')
raise ValueError('Invalid PE value')
self.lvs = []
self.pvs = []
# self.tags = []
# for te in self.xml.findall('tags/tag'):
# self.tags.append(te.text)
self.devpath = self.name
self.info = None
self.created = False

def addPV(self, pvobj):
if not isinstance(pvobj, PV):
_logger.error('pvobj must be of type aif.disk.lvm.PV.')
raise TypeError('Invalid pvobj type')
pvobj.prepare()
self.pvs.append(pvobj)
return(None)

def create(self):
if not self.pvs:
_logger.error('Cannot create a VG with no PVs.')
raise RuntimeError('Missing PVs')
cmd_str = ['vgcreate',
'--reportformat', 'json']
if self.pe_size:
# 0/unset means "use LVM's default extent size"; don't pass --physicalextentsize 0b.
cmd_str.extend(['--physicalextentsize', '{0}b'.format(self.pe_size)])
cmd_str.append(self.name)
for pv in self.pvs:
cmd_str.append(pv.devpath)
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to create VG successfully')
for pv in self.pvs:
pv._parseMeta()
self.created = True
self.updateInfo()
return(None)

def createLV(self, lv_xml = None):
if not self.created:
_logger.error('Attempted to add an LV to a VG before it was created.')
raise RuntimeError('LV before VG creation')
# If lv_xml is None, we loop through our own XML.
if lv_xml:
_logger.debug('Explicit lv_xml specified: {0}'.format(etree.tostring(lv_xml,
with_tail = False).decode('utf-8')))
lv = LV(lv_xml, self)
lv.create()
# self.lvs.append(lv)
else:
for le in self.xml.findall('logicalVolumes/lv'):
_logger.debug('Found lv element: {0}'.format(etree.tostring(le, with_tail = False).decode('utf-8')))
lv = LV(le, self)
lv.create()
# self.lvs.append(lv)
self.updateInfo()
return(None)

def start(self):
_logger.info('Activating VG: {0}.'.format(self.name))
cmd_str = ['vgchange',
'--activate', 'y',
'--reportformat', 'json',
self.name]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to activate VG successfully')
self.updateInfo()
return(None)

def stop(self):
_logger.info('Deactivating VG: {0}.'.format(self.name))
cmd_str = ['vgchange',
'--activate', 'n',
'--reportformat', 'json',
self.name]
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to deactivate VG successfully')
self.updateInfo()
return(None)

def updateInfo(self):
if not self.created:
_logger.warning('Attempted to updateInfo on a VG not created yet.')
return(None)
info = {}
cmd_str = ['vgs',
'--binary',
'--nosuffix',
'--units', 'b',
'--options', '+vgall',
'--reportformat', 'json',
self.name]
_info = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(_info.args)))
if _info.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(_info.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(_info, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
self.info = None
self.created = False
return(None)
_info = json.loads(_info.stdout.decode('utf-8'))['report'][0]['vg'][0]
for k, v in _info.items():
# ints
if k in ('lv_count', 'max_lv', 'max_pv', 'pv_count', 'snap_count', 'vg_extent_count', 'vg_extent_size',
'vg_free', 'vg_free_count', 'vg_mda_count', 'vg_mda_free', 'vg_mda_size', 'vg_mda_used_count',
'vg_missing_pv_count', 'vg_seqno', 'vg_size'):
v = int(v)
# booleans
elif k in ('vg_clustered', 'vg_exported', 'vg_extendable', 'vg_partial', 'vg_shared'):
v = (True if int(v) == 1 else False)
# lists
elif k in ('vg_lock_args', 'vg_permissions', 'vg_tags'): # not 100% sure about vg_permissions...
v = [i.strip() for i in v.split(',') if i.strip() != '']
elif v.strip() == '':
v = None
info[k] = v
self.info = info
_logger.debug('Rendered info: {0}'.format(info))
return(None)
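For context on the ['report'][0][...][0] indexing used throughout the fallback: the LVM reporting tools nest their rows one level down. An abbreviated illustration of `vgs --reportformat json` output (values shortened):

{
    "report": [
        {
            "vg": [
                {"vg_name": "vg0", "pv_count": "1", "lv_count": "2", "vg_size": "1073741824"}
            ]
        }
    ]
}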

aif/disk/main.py Normal file
@ -0,0 +1,2 @@
# TODO
# Remember to genfstab!
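A hedged sketch of what that TODO would likely become, assuming arch-install-scripts is available and everything is mounted under chroot_base (the function name is a placeholder):

import subprocess

def writeFstab(chroot_base):
    # genfstab -U emits UUID-based entries for everything mounted under chroot_base.
    cmd = subprocess.run(['genfstab', '-U', chroot_base],
                         stdout = subprocess.PIPE,
                         stderr = subprocess.PIPE)
    if cmd.returncode != 0:
        raise RuntimeError('Failed to generate fstab')
    with open('{0}/etc/fstab'.format(chroot_base), 'a') as fh:
        fh.write(cmd.stdout.decode('utf-8'))
    return(None)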

aif/disk/mdadm.py Normal file
@ -0,0 +1,241 @@
import datetime
import logging
import os
import re
import uuid
##
from lxml import etree
##
import aif.utils
import aif.constants
from . import _common
import aif.disk.block as block
import aif.disk.luks as luks
import aif.disk.lvm as lvm


_logger = logging.getLogger(__name__)


_BlockDev = _common.BlockDev


class Member(object):
def __init__(self, member_xml, partobj):
self.xml = member_xml
_logger.debug('member_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.device = partobj
if not isinstance(self.device, (block.Disk,
block.Partition,
Array,
luks.LUKS,
lvm.LV)):
_logger.error(('partobj must be of type '
'aif.disk.block.Disk, '
'aif.disk.block.Partition, '
'aif.disk.luks.LUKS, '
'aif.disk.lvm.LV, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
_common.addBDPlugin('mdraid')
self.devpath = self.device.devpath
self.is_superblocked = None
self.superblock = None
self._parseDeviceBlock()

def _parseDeviceBlock(self):
_logger.info('Parsing {0} device block metainfo.'.format(self.devpath))
# TODO: parity with mdadm_fallback.Member._parseDeviceBlock
# key names currently (probably) don't match and need to confirm the information's all present
block = {}
try:
_block = _BlockDev.md.examine(self.devpath)
except _BlockDev.MDRaidError:
_logger.debug('Member device is not a member yet.')
self.is_superblocked = False
self.superblock = None
return(None)
for k in dir(_block):
if k.startswith('_'):
continue
elif k in ('copy', 'eval'):
continue
v = getattr(_block, k)
if k == 'level':
v = int(re.sub(r'^raid', '', v))
elif k == 'update_time':
v = datetime.datetime.fromtimestamp(v)
elif re.search('^(dev_)?uuid$', k):
v = uuid.UUID(hex = v)
block[k] = v
self.superblock = block
_logger.debug('Rendered superblock info: {0}'.format(block))
self.is_superblocked = True
return(None)

def prepare(self):
try:
_BlockDev.md.denominate(self.devpath)
except _BlockDev.MDRaidError:
pass
_BlockDev.md.destroy(self.devpath)
self._parseDeviceBlock()
return(None)


class Array(object):
def __init__(self, array_xml, homehost, devpath = None):
self.xml = array_xml
_logger.debug('array_xml: {0}'.format(etree.tostring(array_xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.level = int(self.xml.attrib['level'])
if self.level not in aif.constants.MDADM_SUPPORTED_LEVELS:
_logger.error(('RAID level ({0}) must be one of: '
'{1}.').format(self.level,
', '.join([str(i) for i in aif.constants.MDADM_SUPPORTED_LEVELS])))
raise ValueError('Invalid RAID level')
self.metadata = self.xml.attrib.get('meta', '1.2')
if self.metadata not in aif.constants.MDADM_SUPPORTED_METADATA:
_logger.error(('Metadata version ({0}) must be one of: '
'{1}.').format(self.metadata, ', '.join(aif.constants.MDADM_SUPPORTED_METADATA)))
raise ValueError('Invalid metadata version')
_common.addBDPlugin('mdraid')
self.chunksize = int(self.xml.attrib.get('chunkSize', 512))
if self.level in (4, 5, 6, 10):
if not aif.utils.isPowerofTwo(self.chunksize):
# TODO: warn instead of raise exception? Will mdadm lose its marbles if it *isn't* a proper number?
_logger.error('Chunksize ({0}) must be a power of 2 for RAID level {1}.'.format(self.chunksize,
self.level))
raise ValueError('Invalid chunksize')
if self.level in (0, 4, 5, 6, 10):
if not aif.utils.hasSafeChunks(self.chunksize):
# TODO: warn instead of raise exception? Will mdadm lose its marbles if it *isn't* a proper number?
_logger.error('Chunksize ({0}) must be divisible by 4 for RAID level {1}'.format(self.chunksize,
self.level))
raise ValueError('Invalid chunksize')
self.layout = self.xml.attrib.get('layout', 'none')
if self.level in aif.constants.MDADM_SUPPORTED_LAYOUTS.keys():
matcher, layout_default = aif.constants.MDADM_SUPPORTED_LAYOUTS[self.level]
if not matcher.search(self.layout):
if layout_default:
self.layout = layout_default
else:
_logger.warning('Did not detect a valid layout.')
self.layout = None
else:
self.layout = None
self.name = self.xml.attrib['name']
self.homehost = homehost # Must be set before self.fullname is built.
self.fullname = '{0}:{1}'.format(self.homehost, self.name)
self.devpath = devpath
if not self.devpath:
self.devpath = '/dev/md/{0}'.format(self.name)
self.members = []
self.state = None
self.info = None
self.updateStatus()

def addMember(self, memberobj):
if not isinstance(memberobj, Member):
_logger.error('memberobj must be of type aif.disk.mdadm.Member.')
raise TypeError('Invalid memberobj type')
memberobj.prepare()
self.members.append(memberobj)
return(None)

def create(self):
if not self.members:
_logger.error('Cannot create an array with no members.')
raise RuntimeError('Missing members')
opts = [_BlockDev.ExtraArg.new('--homehost',
self.homehost),
_BlockDev.ExtraArg.new('--name',
self.name)]
if self.layout:
opts.append(_BlockDev.ExtraArg.new('--layout',
self.layout))
_BlockDev.md.create(self.name,
str(self.level),
[i.devpath for i in self.members],
0,
self.metadata,
True,
(self.chunksize * 1024),
opts)
for m in self.members:
m._parseDeviceBlock()
self.updateStatus()
self.writeConf()
self.devpath = self.info['device']
self.state = 'new'
return(None)

def start(self, scan = False):
_logger.info('Starting array {0}.'.format(self.name))
if not any((self.members, self.devpath)):
_logger.error('Cannot assemble an array with no members (for hints) or device path.')
raise RuntimeError('Cannot start unspecified array')
if scan:
target = None
else:
target = self.name
_BlockDev.md.activate(target,
[i.devpath for i in self.members], # Ignored if scan mode enabled
None,
True,
None)
self.state = 'assembled'
return(None)

def stop(self):
_logger.info('Stopping array {0}.'.format(self.name))
_BlockDev.md.deactivate(self.name)
self.state = 'disassembled'
return(None)

def updateStatus(self):
_status = _BlockDev.md.detail(self.name)
# TODO: parity with mdadm_fallback.Array.updateStatus
# key names currently (probably) don't match and need to confirm the information's all present
info = {}
for k in dir(_status):
if k.startswith('_'):
continue
elif k in ('copy',):
continue
v = getattr(_status, k)
if k == 'level':
v = int(re.sub(r'^raid', '', v))
elif k == 'creation_time':
# TODO: Is this portable/correct? Or do I need to do something like '%a %b %d %H:%M:%s %Y'?
v = datetime.datetime.strptime(v, '%c')
elif k == 'uuid':
v = uuid.UUID(hex = v)
info[k] = v
self.info = info
_logger.debug('Rendered info: {0}'.format(info))
return(None)

def writeConf(self, chroot_base):
conf = os.path.join(chroot_base, 'etc', 'mdadm.conf')
with open(conf, 'r') as fh:
conflines = fh.read().splitlines()
arrayinfo = ('ARRAY '
'{device} '
'metadata={metadata} '
'name={name} '
'UUID={converted_uuid}').format(**self.info,
converted_uuid = _BlockDev.md.get_md_uuid(str(self.info['uuid'])))
if arrayinfo not in conflines:
r = re.compile(r'^ARRAY\s+{0}'.format(self.info['device']))
nodev = True
for l in conflines:
if r.search(l):
nodev = False
# TODO: warning and skip instead?
_logger.error('An array already exists with that name but not with the same opts/GUID/etc.')
raise RuntimeError('Duplicate array')
if nodev:
with open(conf, 'a') as fh:
fh.write('{0}\n'.format(arrayinfo))
return(None)
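A minimal driver sketch for the two classes above (hedged: the homehost, XML elements, and partition objects are illustrative assumptions):

import aif.disk.mdadm as mdadm

arr = mdadm.Array(array_xml, 'examplehost')
for member_xml, part in members:                   # assumed (element, Partition) pairs
    arr.addMember(mdadm.Member(member_xml, part))  # prepare() zeroes any old superblock
arr.create()                                       # build the array and render its status
arr.start()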

aif/disk/mdadm_fallback.py Normal file
@ -0,0 +1,324 @@
import copy
import datetime
import logging
import os
import re
import subprocess
import uuid
##
import mdstat
from lxml import etree
##
import aif.disk.block_fallback as block
import aif.disk.luks_fallback as luks
import aif.disk.lvm_fallback as lvm
import aif.utils
import aif.constants


_logger = logging.getLogger(__name__)


_mdblock_size_re = re.compile(r'^(?P<sectors>[0-9]+)\s+'
r'\((?P<GiB>[0-9.]+)\s+GiB\s+'
r'(?P<GB>[0-9.]+)\s+GB\)')
_mdblock_unused_re = re.compile(r'^before=(?P<before>[0-9]+)\s+sectors,'
r'\s+after=(?P<after>[0-9]+)\s+sectors$')
_mdblock_badblock_re = re.compile(r'^(?P<entries>[0-9]+)\s+entries'
r'[A-Za-z\s]+'
r'(?P<offset>[0-9]+)\s+sectors$')


class Member(object):
def __init__(self, member_xml, partobj):
self.xml = member_xml
_logger.debug('member_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.device = partobj
if not isinstance(self.device, (block.Partition,
block.Disk,
Array,
lvm.LV,
luks.LUKS)):
_logger.error(('partobj must be of type '
'aif.disk.block.Partition, '
'aif.disk.block.Disk, '
'aif.disk.lvm.LV, '
'aif.disk.luks.LUKS, or '
'aif.disk.mdadm.Array.'))
raise TypeError('Invalid partobj type')
self.devpath = self.device.devpath
self.is_superblocked = None
self.superblock = None
self._parseDeviceBlock()

def _parseDeviceBlock(self):
# I can't believe the mdstat module doesn't really have a way to do this.
_super = subprocess.run(['mdadm', '--examine', self.devpath],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(_super.args)))
if _super.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(_super.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(_super, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
self.is_superblocked = False
self.superblock = None
return(None)
block = {}
for idx, line in enumerate(_super.stdout.decode('utf-8').splitlines()):
line = line.strip()
if idx == 0: # This is just the same as self.device.devpath.
continue
if line == '':
continue
k, v = [i.strip() for i in line.split(':', 1)]
orig_k = k
k = re.sub(r'\s+', '_', k.lower())
if k in ('raid_devices', 'events'):
v = int(v)
elif k == 'magic':
v = bytes.fromhex(v)
elif k == 'name':
# TODO: Will this *always* give 2 values?
name, local_to = [i.strip() for i in v.split(None, 1)]
local_to = re.sub(r'[()]', '', local_to)
v = (name, local_to)
elif k == 'raid_level':
v = int(re.sub(r'^raid', '', v))
elif k == 'checksum':
cksum, status = [i.strip() for i in v.split('-')]
v = (bytes.fromhex(cksum), status)
elif k == 'unused_space':
r = _mdblock_unused_re.search(v)
if not r:
_logger.error('Could not parse {0} for {1}\'s superblock'.format(orig_k, self.devpath))
raise RuntimeError('Could not parse unused space in superblock')
v = {}
for i in ('before', 'after'):
v[i] = int(r.group(i)) # in sectors
elif k == 'bad_block_log':
k = 'badblock_log_entries'
r = _mdblock_badblock_re.search(v)
if not r:
_logger.error('Could not parse {0} for {1}\'s superblock'.format(orig_k, self.devpath))
raise RuntimeError('Could not parse badblocks in superblock')
v = {}
for i in ('entries', 'offset'):
v[i] = int(r.group(i)) # offset is in sectors
elif k == 'array_state':
v = [i.strip() for i in v.split(None, 1)][0].split()
elif k == 'device_uuid':
v = uuid.UUID(hex = v.replace(':', '-'))
elif re.search((r'^(creation|update)_time$'), k):
# TODO: Is this portable/correct? Or do I need to do '%a %b %d %H:%M:%s %Y'?
v = datetime.datetime.strptime(v, '%c')
elif re.search(r'^((avail|used)_dev|array)_size$', k):
r = _mdblock_size_re.search(v)
if not r:
_logger.error('Could not parse {0} for {1}\'s superblock'.format(orig_k, self.devpath))
raise RuntimeError('Could not parse size value in superblock')
v = {}
for i in ('sectors', 'GB', 'GiB'):
v[i] = float(r.group(i))
if i == 'sectors':
v[i] = int(v[i])
elif re.search(r'^(data|super)_offset$', k):
v = int(v.split(None, 1)[0])
block[k] = v
self.superblock = block
_logger.debug('Rendered superblock info: {0}'.format(block))
self.is_superblocked = True
return(None)

def prepare(self):
if self.is_superblocked:
cmd = subprocess.run(['mdadm', '--misc', '--zero-superblock', self.devpath],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
self.is_superblocked = False
self._parseDeviceBlock()
return(None)


class Array(object):
def __init__(self, array_xml, homehost, devpath = None):
self.xml = array_xml
_logger.debug('array_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id']
self.level = int(self.xml.attrib['level'])
if self.level not in aif.constants.MDADM_SUPPORTED_LEVELS:
_logger.error(('RAID level ({0}) must be one of: '
'{1}').format(self.level, ', '.join([str(i) for i in aif.constants.MDADM_SUPPORTED_LEVELS])))
raise ValueError('Invalid RAID level')
self.metadata = self.xml.attrib.get('meta', '1.2')
if self.metadata not in aif.constants.MDADM_SUPPORTED_METADATA:
_logger.error(('Metadata version ({0}) must be one of: '
'{1}').format(self.metadata, ', '.join(aif.constants.MDADM_SUPPORTED_METADATA)))
raise ValueError('Invalid metadata version')
self.chunksize = int(self.xml.attrib.get('chunkSize', 512))
if self.level in (4, 5, 6, 10):
if not aif.utils.isPowerofTwo(self.chunksize):
# TODO: warn instead of raise exception? Will mdadm lose its marbles if it *isn't* a proper number?
_logger.error('Chunksize ({0}) must be a power of 2 for RAID level {1}.'.format(self.chunksize,
self.level))
raise ValueError('Invalid chunksize')
if self.level in (0, 4, 5, 6, 10):
if not aif.utils.hasSafeChunks(self.chunksize):
# TODO: warn instead of raise exception? Will mdadm lose its marbles if it *isn't* a proper number?
_logger.error('Chunksize ({0}) must be divisible by 4 for RAID level {1}'.format(self.chunksize,
self.level))
raise ValueError('Invalid chunksize')
self.layout = self.xml.attrib.get('layout', 'none')
if self.level in aif.constants.MDADM_SUPPORTED_LAYOUTS.keys():
matcher, layout_default = aif.constants.MDADM_SUPPORTED_LAYOUTS[self.level]
if not matcher.search(self.layout):
if layout_default:
self.layout = layout_default
else:
_logger.warning('Did not detect a valid layout.')
self.layout = None
else:
self.layout = None
self.name = self.xml.attrib['name']
self.homehost = homehost
self.devpath = devpath
if not self.devpath:
self.devpath = '/dev/md/{0}'.format(self.name)
self.members = []
self.state = None
self.info = None
self.updateStatus()

def addMember(self, memberobj):
if not isinstance(memberobj, Member):
_logger.error('memberobj must be of type aif.disk.mdadm.Member')
raise TypeError('Invalid memberobj type')
memberobj.prepare()
self.members.append(memberobj)
return(None)

def create(self):
if not self.members:
_logger.error('Cannot create an array with no members.')
raise RuntimeError('Missing members')
cmd_str = ['mdadm', '--create',
'--name={0}'.format(self.name),
'--bitmap=internal',
'--level={0}'.format(self.level),
'--metadata={0}'.format(self.metadata),
'--chunk={0}'.format(self.chunksize),
'--homehost={0}'.format(self.homehost),
'--raid-devices={0}'.format(len(self.members))]
if self.layout:
cmd_str.append('--layout={0}'.format(self.layout))
cmd_str.append(self.devpath)
for m in self.members:
cmd_str.append(m.devpath)
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to create array successfully')
for m in self.members:
m._parseDeviceBlock()
self.updateStatus()
self.writeConf()
self.state = 'new'
return(None)

def start(self, scan = False):
_logger.info('Starting array {0}.'.format(self.name))
if not any((self.members, self.devpath)):
_logger.error('Cannot assemble an array with no members (for hints) or device path.')
raise RuntimeError('Cannot start unspecified array')
cmd_str = ['mdadm', '--assemble', self.devpath]
if not scan:
for m in self.members:
cmd_str.append(m.devpath)
else:
cmd_str.append('--scan')
cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to start array successfully')
self.updateStatus()
self.state = 'assembled'
return(None)

def stop(self):
_logger.info('Stopping array {0}.'.format(self.name))
cmd = subprocess.run(['mdadm', '--stop', self.devpath],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to stop array successfully')
self.state = 'disassembled'
return(None)

def updateStatus(self):
_info = mdstat.parse()
# Iterate over a copied key list; deleting from a dict while iterating over it raises a RuntimeError.
for k in list(_info['devices'].keys()):
if k != self.name:
del(_info['devices'][k])
self.info = copy.deepcopy(_info)
_logger.debug('Rendered info: {0}'.format(_info))
return(None)

# Defaults to the live environment, since create() calls this without an argument.
def writeConf(self, chroot_base = '/'):
conf = os.path.join(chroot_base, 'etc', 'mdadm.conf')
with open(conf, 'r') as fh:
conflines = fh.read().splitlines()
cmd = subprocess.run(['mdadm', '--detail', '--brief', self.devpath],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to get information about array successfully')
arrayinfo = cmd.stdout.decode('utf-8').strip()
if arrayinfo not in conflines:
r = re.compile(r'^ARRAY\s+{0}'.format(self.devpath))
nodev = True
for l in conflines:
if r.search(l):
nodev = False
# TODO: warning and skip instead?
_logger.error('An array already exists with that name but not with the same opts/GUID/etc.')
raise RuntimeError('Duplicate array')
if nodev:
with open(conf, 'a') as fh:
fh.write('{0}\n'.format(arrayinfo))
return(None)
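The run-and-log sequence above is duplicated verbatim in create(), start(), stop(), and writeConf(). A minimal refactoring sketch (run_and_log() is a hypothetical helper, not part of the module) that would collapse the duplication:

import logging
import subprocess

_logger = logging.getLogger(__name__)


def run_and_log(cmd_str, err_msg):
    # Run a command, mirror the module's logging pattern, and raise on failure.
    cmd = subprocess.run(cmd_str, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
    _logger.info('Executed: {0}'.format(' '.join(cmd.args)))
    if cmd.returncode != 0:
        _logger.warning('Command returned non-zero status')
        _logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
        for a in ('stdout', 'stderr'):
            x = getattr(cmd, a)
            if x:
                _logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
        raise RuntimeError(err_msg)
    return(cmd)

stop() would then shrink to a single call: run_and_log(['mdadm', '--stop', self.devpath], 'Failed to stop array successfully').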

73
aif/envsetup.py Normal file
View File

@ -0,0 +1,73 @@
# This can set up an environment at runtime.
# This removes the necessity of extra libs to be installed persistently.
# However, it is recommended that you install all dependencies in the system itself, because some aren't available
# through pip/PyPi.
# Before you hoot and holler about this, Let's Encrypt's certbot-auto does the same thing.
# Except I segregate it out even further; I don't even install pip into the system python.

import ensurepip
import json
import logging
import os
import subprocess
import sys
import tempfile
import venv
##
import aif.constants_fallback


_logger = logging.getLogger(__name__)


class EnvBuilder(object):
def __init__(self):
self.vdir = tempfile.mkdtemp(prefix = '.aif_', suffix = '_VENV')
# Note: venv.create() returns None; the new environment is addressed via self.vdir below.
self.venv = venv.create(self.vdir, system_site_packages = True, clear = True, with_pip = True)
ensurepip.bootstrap(root = self.vdir)
# pip does some dumb env var things and doesn't clean up after itself.
for v in ('PIP_CONFIG_FILE', 'ENSUREPIP_OPTIONS', 'PIP_REQ_TRACKER', 'PLAT'):
if os.environ.get(v):
del(os.environ[v])
moddir_raw = subprocess.run([os.path.join(self.vdir,
'bin',
'python3'),
'-c',
('import site; '
'import json; '
'print(json.dumps(site.getsitepackages(), indent = 4))')],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(moddir_raw.args)))
if moddir_raw.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(moddir_raw.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(moddir_raw, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to establish environment successfully')
self.modulesdir = json.loads(moddir_raw.stdout.decode('utf-8'))[0]
# This is SO. DUMB. WHY DO I HAVE TO CALL PIP FROM A SHELL. IT'S WRITTEN IN PYTHON.
# https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program
for m in aif.constants_fallback.EXTERNAL_DEPS:
pip_cmd = [os.path.join(self.vdir,
'bin',
'python3'),
'-m',
'pip',
'install',
'--disable-pip-version-check',
m]
cmd = subprocess.run(pip_cmd, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to install module successfully')
# And now make it available to other components.
sys.path.insert(1, self.modulesdir)
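A minimal usage sketch; it assumes aif.constants_fallback.EXTERNAL_DEPS names pip-installable distributions (lxml below is purely illustrative):

import aif.envsetup

# Instantiated for its side effects: a throwaway venv is built, the external
# deps are pip-installed into it, and its site-packages dir is added to sys.path.
env = aif.envsetup.EnvBuilder()
print(env.modulesdir)

import lxml  # resolvable now, even if absent from the system python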

47
aif/log.py Normal file
View File

@ -0,0 +1,47 @@
import logging
import logging.handlers
import os
##
try:
# https://www.freedesktop.org/software/systemd/python-systemd/journal.html#journalhandler-class
from systemd import journal
_has_journald = True
except ImportError:
_has_journald = False
##
from . import constants_fallback

_cfg_args = {'handlers': [],
'level': logging.DEBUG} # TEMPORARY FOR TESTING
if _has_journald:
# There were some weird changes somewhere along the line.
try:
# But it's *probably* this one.
h = journal.JournalHandler()
except AttributeError:
h = journal.JournaldLogHandler()
# Systemd includes times, so we don't need to.
h.setFormatter(logging.Formatter(style = '{',
fmt = ('{levelname}:{name}:{filename}:'
'{funcName}:{lineno}: {message}')))
_cfg_args['handlers'].append(h)
# Logfile
# Set up the permissions beforehand.
os.makedirs(os.path.dirname(constants_fallback.DEFAULT_LOGFILE), exist_ok = True)
# Touch the logfile first; os.chmod() on a nonexistent path raises FileNotFoundError.
open(constants_fallback.DEFAULT_LOGFILE, 'a').close()
os.chmod(constants_fallback.DEFAULT_LOGFILE, 0o0600)
h = logging.handlers.RotatingFileHandler(constants_fallback.DEFAULT_LOGFILE,
encoding = 'utf8',
# Disable rotating for now.
# maxBytes = 50000000000,
# backupCount = 30
)
h.setFormatter(logging.Formatter(style = '{',
fmt = ('{asctime}:'
'{levelname}:{name}:{filename}:'
'{funcName}:{lineno}: {message}')))
_cfg_args['handlers'].append(h)

logging.basicConfig(**_cfg_args)
logger = logging.getLogger()

logger.info('Logging initialized.')
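A sketch of how the rest of the package consumes this: importing aif.log triggers the logging.basicConfig() call above exactly once, and per-module loggers created afterwards inherit its handlers through the root logger.

import logging

import aif.log  # noqa: F401 -- imported for its side effect (root logger configuration)

_logger = logging.getLogger(__name__)
_logger.info('Reaches the logfile, and journald when python-systemd is available.')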

72
aif/network/__init__.py Normal file
View File

@ -0,0 +1,72 @@
import logging
import os
##
from lxml import etree
##
from . import _common
from . import netctl
from . import networkd
from . import networkmanager

# No longer necessary:
# try:
# from . import _common
# except ImportError:
# pass # GI isn't supported, so we don't even use a fallback.

# http://0pointer.net/blog/the-new-sd-bus-api-of-systemd.html
# https://www.youtube.com/watch?v=ZUX9Fx8Rwzg
# https://www.youtube.com/watch?v=lBQgMGPxqNo
# https://github.com/facebookincubator/pystemd has some unit/service examples
# try:
# from . import networkd
# except ImportError:
# from . import networkd_fallback as networkd


_logger = logging.getLogger(__name__)


class Net(object):
def __init__(self, chroot_base, network_xml):
self.xml = network_xml
# We don't bother logging the entirety of network_xml here because we do it in the respective networks
self.chroot_base = chroot_base
self.hostname = self.xml.attrib['hostname'].strip()
_logger.info('Hostname: {0}'.format(self.hostname))
self.provider = self.xml.attrib.get('provider', 'networkd').strip()
if self.provider == 'netctl':
self.provider = netctl
elif self.provider == 'nm':
self.provider = networkmanager
elif self.provider == 'networkd':
self.provider = networkd
else:
_logger.error('Unable to determine which network provider to use based on configuration.')
raise RuntimeError('Could not determine provider')
self.connections = []
self._initConns()

def _initConns(self):
for e in self.xml.xpath('ethernet|wireless'):
conn = None
if e.tag == 'ethernet':
conn = self.provider.Ethernet(e)
elif e.tag == 'wireless':
conn = self.provider.Wireless(e)
self.connections.append(conn)
_logger.info('Added connection of type {0}.'.format(type(conn).__name__))

def apply(self, chroot_base):
cfg = os.path.join(chroot_base, 'etc', 'hostname')
with open(cfg, 'w') as fh:
fh.write('{0}\n'.format(self.hostname))
os.chown(cfg, 0, 0)
os.chmod(cfg, 0o0644)
_logger.info('Wrote: {0}'.format(cfg))
for iface in self.connections:
for src, dest in iface.services.items():
# A source unit may need linking into several targets, so values may be a str or a list of them.
for d in (dest if isinstance(dest, (list, tuple)) else [dest]):
realdest = os.path.join(chroot_base, d)
os.symlink(src, realdest)
iface.writeConf(chroot_base)
return(None)
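A hedged sketch of driving Net directly; the element and attribute names are inferred from the lookups in __init__() and _initConns() above, and the chroot skeleton is assumed to exist already:

from lxml import etree

import aif.network

xml = etree.fromstring('<network hostname="aiftest.example.com" provider="networkd">'
                       '<ethernet id="wired0" device="auto" defroute="true"/>'
                       '</network>')
net = aif.network.Net('/mnt/aif', xml)
net.apply('/mnt/aif')  # writes etc/hostname, service symlinks, and per-connection configs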

263
aif/network/_common.py Normal file
View File

@ -0,0 +1,263 @@
import binascii
import ipaddress
import logging
import os
import pathlib
import re
##
from lxml import etree
from passlib.crypto.digest import pbkdf2_hmac
from pyroute2 import IPDB
##
import aif.utils

# Not needed
# import gi
# gi.require_version('NM', '1.0')
# from gi.repository import GObject, NM, GLib


_logger = logging.getLogger('net:_common')


def canonizeEUI(phyaddr):
phyaddr = re.sub(r'[.:-]', '', phyaddr.upper().strip())
eui = ':'.join(['{0}'.format(phyaddr[i:i+2]) for i in range(0, 12, 2)])
return(eui)


def convertIpTuples(addr_xmlobj):
# These tuples follow either:
# ('dhcp'/'dhcp6'/'slaac', None, None) for auto configuration
# (ipaddress.IPv4/6Address(IP), CIDR, ipaddress.IPv4/6Address(GW)) for static configuration
if addr_xmlobj.text in ('dhcp', 'dhcp6', 'slaac'):
addr = addr_xmlobj.text.strip()
net = None
gw = None
else:
components = addr_xmlobj.text.strip().split('/')
if len(components) > 2:
_logger.error('Too many slashes in IP/CIDR string.')
raise ValueError('Invalid IP/CIDR format: {0}'.format(addr_xmlobj.text))
if len(components) == 1:
addr = ipaddress.ip_address(components[0])
if addr.version == 4:
components.append('24')
elif addr.version == 6:
components.append('64')
addr = ipaddress.ip_address(components[0])
net = ipaddress.ip_network('/'.join(components), strict = False)
try:
gw = ipaddress.ip_address(addr_xmlobj.attrib.get('gateway').strip())
except (ValueError, AttributeError):
_logger.warning(('Non-conformant gateway value (attempting automatic gateway address): '
'{0}').format(addr_xmlobj.attrib.get('gateway')))
gw = next(net.hosts())
return((addr, net, gw))


def convertPSK(ssid, passphrase):
try:
passphrase = passphrase.encode('utf-8').decode('ascii').strip('\r').strip('\n')
except UnicodeDecodeError:
_logger.error('WPA passphrase must be an ASCII string')
raise ValueError('Passed invalid encoding for WPA PSK string')
if len(ssid) > 32:
_logger.error('SSID must be <= 32 characters long.')
raise ValueError('Invalid ssid length')
if not 7 < len(passphrase) < 64:
_logger.error('Passphrase must be >= 8 and <= 63 characters long.')
raise ValueError('Invalid passphrase length')
raw_psk = pbkdf2_hmac('sha1', str(passphrase), str(ssid), 4096, 32)
hex_psk = binascii.hexlify(raw_psk)
str_psk = hex_psk.decode('utf-8')
_logger.debug('Converted passphrase for SSID {0} to PSK {1}'.format(str(ssid), str_psk))
return(str_psk)


def convertWifiCrypto(crypto_xmlobj, ssid):
crypto = {'type': crypto_xmlobj.find('type').text.strip(),
'auth': {}}
_logger.info('Parsing a WiFi crypto object.')
creds_xml = crypto_xmlobj.xpath('psk|enterprise')[0]
# if crypto['type'] in ('wpa', 'wpa2', 'wpa3'):
if crypto['type'] in ('wpa', 'wpa2'):
crypto['mode'] = creds_xml.tag
if crypto['mode'] == 'psk':
crypto['mode'] = 'personal'
else:
crypto['mode'] = None
if crypto['mode'] == 'personal':
# creds_xml *is* the <psk> element selected by the XPath above.
psk_xml = creds_xml
if aif.utils.xmlBool(psk_xml.attrib.get('isKey', 'false')):
try:
crypto['auth']['passphrase'] = psk_xml.text.strip('\r').strip('\n')
except UnicodeDecodeError:
_logger.error('WPA-PSK passphrases must be ASCII.')
raise ValueError('Invalid WPA-PSK encoding')
crypto['auth']['psk'] = convertPSK(ssid, crypto['auth']['passphrase'])
else:
crypto['auth']['psk'] = psk_xml.text.strip().lower()
# TODO: enterprise support
# elif crypto['mode'] == 'enterprise':
# pass
_logger.debug('Rendered crypto settings: {0}'.format(crypto))
return(crypto)


def getDefIface(ifacetype):
if ifacetype == 'ethernet':
if isNotPersistent():
prefix = 'eth'
else:
prefix = 'en'
elif ifacetype == 'wireless':
prefix = 'wl'
else:
_logger.error('ifacetype must be one of "ethernet" or "wireless"')
raise ValueError('Invalid iface type')
ifname = None
with IPDB() as ipdb:
for iface in ipdb.interfaces.keys():
if iface.startswith(prefix):
ifname = iface
break
if not ifname:
_logger.warning('Unable to find default interface')
return(None)
return(ifname)


def isNotPersistent(chroot_base = '/'):
chroot_base = pathlib.Path(chroot_base)
systemd_override = chroot_base.joinpath('etc',
'systemd',
'network',
'99-default.link')
kernel_cmdline = chroot_base.joinpath('proc', 'cmdline')
devnull = chroot_base.joinpath('dev', 'null')
rootdevnull = pathlib.PosixPath('/dev/null')
if os.path.islink(systemd_override) and pathlib.Path(systemd_override).resolve() in (devnull, rootdevnull):
return(True)
cmds = aif.utils.kernelCmdline(chroot_base)
if 'net.ifnames' in cmds.keys() and cmds['net.ifnames'] == '0':
_logger.debug('System network interfaces are not persistent')
return(True)
_logger.debug('System network interfaces are persistent')
return(False)


class BaseConnection(object):
def __init__(self, iface_xml):
self.xml = iface_xml
_logger.debug('iface_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.id = self.xml.attrib['id'].strip()
self.device = self.xml.attrib['device'].strip()
self.is_defroute = aif.utils.xmlBool(self.xml.attrib.get('defroute', 'false').strip())
try:
self.domain = self.xml.attrib.get('searchDomain').strip()
except AttributeError:
self.domain = None
self.dhcp_client = self.xml.attrib.get('dhcpClient', 'dhcpcd').strip()
self._cfg = None
self.connection_type = None
self.provider_type = None
self.packages = []
self.services = {}
self.resolvers = []
self.addrs = {'ipv4': [],
'ipv6': []}
self.routes = {'ipv4': [],
'ipv6': []}
self.auto = {}
for x in ('resolvers', 'routes', 'addresses'):
self.auto[x] = {}
x_xml = self.xml.find(x)
for t in ('ipv4', 'ipv6'):
if t == 'ipv6' and x == 'addresses':
self.auto[x][t] = 'slaac'
else:
self.auto[x][t] = True
if x_xml:
t_xml = x_xml.find(t)
if t_xml:
if t == 'ipv6' and x == 'addresses':
a = t_xml.attrib.get('auto', 'slaac').strip()
if a.lower() in ('false', '0', 'none'):
self.auto[x][t] = False
else:
self.auto[x][t] = a
else:
self.auto[x][t] = aif.utils.xmlBool(t_xml.attrib.get('auto', 'true').strip())
# These defaults are from the man page. However, we might want to add:
# domain-search, netbios-scope, interface-mtu, rfc3442-classless-static-routes, ntp-servers,
# dhcp6.fqdn, dhcp6.sntp-servers
# under requests and for requires, maybe:
# routers, domain-name-servers, domain-name, domain-search, host-name
self.dhcp_defaults = {
'dhclient': {'requests': {'ipv4': ('subnet-mask', 'broadcast-address', 'time-offset', 'routers',
'domain-name', 'domain-name-servers', 'host-name'),
'ipv6': ('dhcp6.name-servers',
'dhcp6.domain-search')},
'requires': {'ipv4': tuple(),
'ipv6': tuple()}},
'dhcpcd': {'default_opts': ('hostname', 'duid', 'persistent', 'slaac private', 'noipv4ll'),
# dhcpcd -V to display variables.
# "option <foo>", prepend "dhcp6_" for ipv6. if no ipv6 opts present, same are mapped to ipv6.
# But we explicitly add them for munging downstream.
'requests': {'ipv4': ('rapid_commit', 'domain_name_servers', 'domain_name', 'domain_search',
'host_name', 'classless_static_routes', 'interface_mtu'),
'ipv6': ('dhcp6_rapid_commit', 'dhcp6_domain_name_servers', 'dhcp6_domain_name',
'dhcp6_domain_search', 'dhcp6_host_name', 'dhcp6_classless_static_routes',
'dhcp6_interface_mtu')},
# "require <foo>"
'requires': {'ipv4': ('dhcp_server_identifier', ),
'ipv6': tuple()}}}
self._initAddrs()
self._initResolvers()
self._initRoutes()
_logger.info('Instantiated network provider {0}'.format(type(self).__name__))

def _initAddrs(self):
for addrtype in ('ipv4', 'ipv6'):
for a in self.xml.findall('addresses/{0}/address'.format(addrtype)):
addrset = convertIpTuples(a)
if addrset not in self.addrs[addrtype]:
self.addrs[addrtype].append(addrset)
return(None)

def _initCfg(self):
# A dummy method; this is overridden by the subclasses.
# It's honestly here to make my IDE stop complaining. :)
pass
return(None)

def _initConnCfg(self):
# A dummy method; this is overridden by the subclasses.
# It's honestly here to make my IDE stop complaining. :)
pass
return(None)

def _initResolvers(self):
resolvers_xml = self.xml.find('resolvers')
if resolvers_xml:
for r in resolvers_xml.findall('resolver'):
resolver = ipaddress.ip_address(r.text.strip())
if resolver not in self.resolvers:
self.resolvers.append(resolver)
return(None)

def _initRoutes(self):
routes_xml = self.xml.find('routes')
if routes_xml:
for addrtype in ('ipv4', 'ipv6'):
for a in self.xml.findall('routes/{0}/route'.format(addrtype)):
addrset = convertIpTuples(a)
if addrset not in self.routes[addrtype]:
self.routes[addrtype].append(addrset)
return(None)

def _writeConnCfg(self, chroot_base):
# Dummy method.
pass
return(None)
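A few worked examples for the helpers above (addresses from the documentation ranges; convertPSK() performs the same PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 rounds, 32 bytes) derivation as wpa_passphrase(8)):

from lxml import etree

from aif.network import _common

print(_common.canonizeEUI('00-11-22-aa-bb-cc'))
# 00:11:22:AA:BB:CC

addr_xml = etree.fromstring('<address gateway="198.51.100.1">198.51.100.10/24</address>')
print(_common.convertIpTuples(addr_xml))
# (IPv4Address('198.51.100.10'), IPv4Network('198.51.100.0/24'), IPv4Address('198.51.100.1'))

print(_common.convertPSK('examplenet', 'correct horse battery staple'))
# a 64-char hex string, identical to wpa_passphrase's output for the same inputs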

315
aif/network/netctl.py Normal file
View File

@ -0,0 +1,315 @@
import configparser
import io
import logging
import os
##
import aif.utils
from . import _common


_logger = logging.getLogger(__name__)


class Connection(_common.BaseConnection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
# TODO: disabling default route is not supported in-band.
# https://bugs.archlinux.org/task/64651
# TODO: disabling autoroutes is not supported in-band.
# https://bugs.archlinux.org/task/64651
# TODO: netctl profiles only support a single gateway.
# is there a way to manually add alternative gateways?
if not self.dhcp_client:
self.dhcp_client = 'dhcpcd'
self.provider_type = 'netctl'
self.packages = {'netctl', 'openresolv'}
self.services = {('/usr/lib/systemd/system/netctl@.service'): ('etc/systemd/system'
'/multi-user.target.wants'
'/netctl@{0}.service').format(self.id)}
# Only used if we need to override default dhcp/dhcp6 behaviour. I don't *think* we can customize SLAAC?
self.chroot_dir = os.path.join('etc', 'netctl', 'custom', self.dhcp_client)
self.chroot_cfg = os.path.join(self.chroot_dir, self.id)
self.desc = None

def _initCfg(self):
_logger.info('Building config.')
if self.device == 'auto':
self.device = _common.getDefIface(self.connection_type)
self.desc = ('A {0} profile for {1} (generated by AIF-NG)').format(self.connection_type,
self.device)
self._cfg = configparser.ConfigParser(allow_no_value = True, interpolation = None)
self._cfg.optionxform = str
# configparser *requires* sections. netctl doesn't use them. We strip it when we write.
self._cfg['BASE'] = {'Description': self.desc,
'Interface': self.device,
'Connection': self.connection_type}
# Addresses
if self.auto['addresses']['ipv4']:
self.packages.add(self.dhcp_client)
self._cfg['BASE']['IP'] = 'dhcp'
self._cfg['BASE']['DHCPClient'] = self.dhcp_client
else:
if self.addrs['ipv4']:
self._cfg['BASE']['IP'] = 'static'
else:
self._cfg['BASE']['IP'] = 'no'
if self.domain:
self._cfg['BASE']['DNSSearch'] = self.domain
if self.auto['addresses']['ipv6']:
if self.auto['addresses']['ipv6'] == 'slaac':
self._cfg['BASE']['IP6'] = 'stateless'
elif self.auto['addresses']['ipv6'] == 'dhcp6':
self._cfg['BASE']['IP6'] = 'dhcp'
self._cfg['BASE']['DHCP6Client'] = self.dhcp_client
self.packages.add(self.dhcp_client)
else:
if not self.addrs['ipv6']:
self._cfg['BASE']['IP6'] = 'no'
else:
self._cfg['BASE']['IP6'] = 'static'
for addrtype in ('ipv4', 'ipv6'):
keysuffix = ('6' if addrtype == 'ipv6' else '')
addrkey = 'Address{0}'.format(keysuffix)
gwkey = 'Gateway{0}'.format(keysuffix)
str_addrs = []
if self.addrs[addrtype] and not self.auto['addresses'][addrtype]:
for ip, cidr, gw in self.addrs[addrtype]:
if not self.is_defroute:
self._cfg['BASE'][gwkey] = str(gw)
str_addrs.append("'{0}/{1}'".format(str(ip), str(cidr.prefixlen)))
self._cfg['BASE'][addrkey] = '({0})'.format(' '.join(str_addrs))
elif self.addrs[addrtype]:
if 'IPCustom' not in self._cfg['BASE']:
# TODO: do this more cleanly somehow? Might conflict with other changes earlier/later.
# Could I shlex it?
# Weird hack because netctl doesn't natively support assigning add'l addrs to
# a dhcp/dhcp6/slaac iface.
self._cfg['BASE']['IPCustom'] = []
for ip, cidr, gw in self.addrs[addrtype]:
self._cfg['BASE']['IPCustom'].append("'ip address add {0}/{1} dev {2}'".format(str(ip),
str(cidr.prefixlen),
self.device))
# Resolvers may also require a change to /etc/resolvconf.conf?
if self.resolvers:
str_resolvers = []
for r in self.resolvers:
str_resolvers.append("'{0}'".format(str(r)))
self._cfg['BASE']['DNS'] = '({0})'.format(' '.join(str_resolvers))
# Routes
for addrtype in ('ipv4', 'ipv6'):
if self.routes[addrtype]:
keysuffix = ('6' if addrtype == 'ipv6' else '')
routekey = 'Routes{0}'.format(keysuffix)
str_routes = []
for dest, net, gw in self.routes[addrtype]:
str_routes.append("'{0}/{1} via {2}'".format(str(dest),
str(net.prefixlen),
str(gw)))
self._cfg['BASE'][routekey] = '({0})'.format(' '.join(str_routes))
# Weird hack because netctl doesn't natively support assigning add'l addrs to a dhcp/dhcp6/slaac iface.
if 'IPCustom' in self._cfg['BASE'].keys() and isinstance(self._cfg['BASE']['IPCustom'], list):
self._cfg['BASE']['IPCustom'] = '({0})'.format(' '.join(self._cfg['BASE']['IPCustom']))
_logger.info('Config built successfully.')
# TODO: does this render correctly?
_logger.debug('Config: {0}'.format(dict(self._cfg['BASE'])))
return(None)

def writeConf(self, chroot_base):
systemd_base = os.path.join(chroot_base, 'etc', 'systemd', 'system')
systemd_file = os.path.join(systemd_base, 'netctl@{0}.service.d'.format(self.id), 'profile.conf')
netctl_file = os.path.join(chroot_base, 'etc', 'netctl', self.id)
for f in (systemd_file, netctl_file):
dpath = os.path.dirname(f)
os.makedirs(dpath, exist_ok = True)
os.chmod(dpath, 0o0755)
os.chown(dpath, 0, 0)
for root, dirs, files in os.walk(dpath):
for d in dirs:
fulld = os.path.join(root, d)
os.chmod(fulld, 0o0755)
os.chown(fulld, 0, 0)
systemd_cfg = configparser.ConfigParser(allow_no_value = True, interpolation = None)
systemd_cfg.optionxform = str
systemd_cfg['Unit'] = {'Description': self.desc,
'BindsTo': 'sys-subsystem-net-devices-{0}.device'.format(self.device),
'After': 'sys-subsystem-net-devices-{0}.device'.format(self.device)}
with open(systemd_file, 'w') as fh:
systemd_cfg.write(fh, space_around_delimiters = False)
_logger.info('Wrote systemd unit: {0}'.format(systemd_file))
# This is where it gets... weird.
# Gross hacky workarounds because netctl, while great for simple setups, sucks for complex/advanced ones.
no_auto = not all((self.auto['resolvers']['ipv4'],
self.auto['resolvers']['ipv6'],
self.auto['routes']['ipv4'],
self.auto['routes']['ipv6']))
no_dhcp = not any((self.auto['addresses']['ipv4'],
self.auto['addresses']['ipv6']))
if (no_auto and not no_dhcp) or (not self.is_defroute and not no_dhcp):
if self.dhcp_client == 'dhcpcd':
if not all((self.auto['resolvers']['ipv4'],
self.auto['routes']['ipv4'],
self.auto['addresses']['ipv4'])):
self._cfg['BASE']['DhcpcdOptions'] = "'--config {0}'".format(os.path.join('/', self.chroot_cfg))
if not all((self.auto['resolvers']['ipv6'],
self.auto['routes']['ipv6'],
self.auto['addresses']['ipv6'])):
self._cfg['BASE']['DhcpcdOptions6'] = "'--config {0}'".format(os.path.join('/', self.chroot_cfg))
elif self.dhcp_client == 'dhclient':
if not all((self.auto['resolvers']['ipv4'],
self.auto['routes']['ipv4'],
self.auto['addresses']['ipv4'])):
self._cfg['BASE']['DhcpcdOptions'] = "'-cf {0}'".format(os.path.join('/', self.chroot_cfg))
if not all((self.auto['resolvers']['ipv6'],
self.auto['routes']['ipv6'],
self.auto['addresses']['ipv6'])):
self._cfg['BASE']['DhcpcdOptions6'] = "'-cf {0}'".format(os.path.join('/', self.chroot_cfg))
custom_dir = os.path.join(chroot_base, self.chroot_dir)
custom_cfg = os.path.join(chroot_base, self.chroot_cfg)
os.makedirs(custom_dir, exist_ok = True)
for root, dirs, files in os.walk(custom_dir):
os.chown(root, 0, 0)
os.chmod(root, 0o0755)
for d in dirs:
dpath = os.path.join(root, d)
os.chown(dpath, 0, 0)
os.chmod(dpath, 0o0755)
for f in files:
fpath = os.path.join(root, f)
os.chown(fpath, 0, 0)
os.chmod(fpath, 0o0644)
# Modify DHCP options. WHAT a mess.
# The default requires are VERY sparse, and fine to remain unmangled for what we do.
opts = {}
for x in ('requests', 'requires'):
opts[x] = {}
for t in ('ipv4', 'ipv6'):
opts[x][t] = list(self.dhcp_defaults[self.dhcp_client][x][t])
opt_map = {
'dhclient': {
'resolvers': {
'ipv4': ('domain-name-servers', ),
'ipv6': ('dhcp6.domain-name-servers', )},
'routes': {
'ipv4': ('rfc3442-classless-static-routes', 'static-routes'),
'ipv6': tuple()}, # ???
# There is no way, as far as I can tell, to tell dhclient to NOT request an address.
'addresses': {
'ipv4': tuple(),
'ipv6': tuple()}},
'dhcpcd': {
'resolvers': {
'ipv4': ('domain_name_servers', ),
'ipv6': ('dhcp6_domain_name_servers', )},
'routes': {
'ipv4': ('classless_static_routes', 'static_routes'),
'ipv6': tuple()}, # ???
# I don't think dhcpcd lets us refuse an address.
'addresses': {
'ipv4': tuple(),
'ipv6': tuple()}}}
# This ONLY works for DHCPv6 on the IPv6 side. Not SLAAC. Netctl doesn't use a dhcp client for
# SLAAC, just iproute2. :|
# x = routers, addresses, resolvers
# t = ipv4/ipv6 dicts
# i = ipv4/ipv6 key
# v = boolean of auto
# o = each option for given auto type and IP type
for x, t in self.auto.items():
for i, v in t.items():
if not v:
for o in opt_map[self.dhcp_client][x][i]:
for n in ('requests', 'requires'):
if o in opts[n][i]:
opts[n][i].remove(o)
# We don't want the default route if we're not the default route iface.
if not self.is_defroute:
# IPv6 uses RA for the default route... We'll probably need to do that via an ExecUpPost?
# TODO.
for i in ('requests', 'requires'):
if 'routers' in opts[i]['ipv4']:
opts[i]['ipv4'].remove('routers')
if self.dhcp_client == 'dhclient':
conf = ['lease {',
' interface "{0}";'.format(self.device),
'}']
for i in ('request', 'require'):
k = '{0}s'.format(i)
optlist = []
for t in ('ipv4', 'ipv6'):
optlist.extend(opts[k][t])
if optlist:
conf.insert(-1, ' {0} {1};'.format(k, ', '.join(optlist)))
elif self.dhcp_client == 'dhcpcd':
conf = []
conf.extend(list(self.dhcp_defaults['dhcpcd']['default_opts']))
for i in ('requests', 'requires'):
if i == 'requests':
k = 'option'
else:
k = 'require'
optlist = []
optlist.extend(opts[i]['ipv4'])
optlist.extend(opts[i]['ipv6'])
# TODO: does require support comma-separated list like option does?
# Note: unlike dhclient.conf, dhcpcd.conf doesn't take trailing semicolons.
conf.append('{0} {1}'.format(k, ', '.join(optlist)))
with open(custom_cfg, 'w') as fh:
fh.write('\n'.join(conf))
fh.write('\n')
os.chmod(custom_cfg, 0o0644)
os.chown(custom_cfg, 0, 0)
_logger.info('Wrote: {0}'.format(custom_cfg))
# And we have to strip out the section from the ini.
cfgbuf = io.StringIO()
self._cfg.write(cfgbuf, space_around_delimiters = False)
cfgbuf.seek(0, 0)
with open(netctl_file, 'w') as fh:
for line in cfgbuf.readlines():
if line.startswith('[BASE]') or line.strip() == '':
continue
fh.write(line)
os.chmod(netctl_file, 0o0600)
os.chown(netctl_file, 0, 0)
_logger.info('Wrote: {0}'.format(netctl_file))
return(None)


class Ethernet(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'ethernet'
self._initCfg()


class Wireless(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'wireless'
self.packages.add('wpa_supplicant')
self._initCfg()
self._initConnCfg()

def _initConnCfg(self):
self._cfg['BASE']['ESSID'] = "'{0}'".format(self.xml.attrib['essid'])
hidden = aif.utils.xmlBool(self.xml.attrib.get('hidden', 'false'))
if hidden:
self._cfg['BASE']['Hidden'] = 'yes'
try:
bssid = self.xml.attrib.get('bssid').strip()
except AttributeError:
bssid = None
if bssid:
bssid = _common.canonizeEUI(bssid)
self._cfg['BASE']['AP'] = bssid
crypto = self.xml.find('encryption')
if crypto:
crypto = _common.convertWifiCrypto(crypto, self.xml.attrib['essid'])
# if crypto['type'] in ('wpa', 'wpa2', 'wpa3'):
if crypto['type'] in ('wpa', 'wpa2'):
# TODO: WPA2 enterprise
self._cfg['BASE']['Security'] = 'wpa'
# if crypto['type'] in ('wep', 'wpa', 'wpa2', 'wpa3'):
if crypto['type'] in ('wpa', 'wpa2'):
self._cfg['BASE']['Key'] = crypto['auth']['psk']
return(None)
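For orientation, the profile writeConf() emits for a simple all-auto DHCP ethernet connection (id wired0 on eth0) looks roughly like this once the [BASE] header is stripped, matching netctl's flat key=value format:

Description=A ethernet profile for eth0 (generated by AIF-NG)
Interface=eth0
Connection=ethernet
IP=dhcp
DHCPClient=dhcpcd
IP6=stateless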

19
aif/network/networkd.conf.j2 Normal file
View File

@ -0,0 +1,19 @@
# Generated by AIF-NG.
{%- for section_name, section_items in cfg.items() %}
{%- if section_items|isList %}
{#- We *only* use lists-of-dicts because they should always render to their own sections.
INI doesn't support nesting, thankfully. #}
{%- for i in section_items %}
[{{ section_name }}]
{%- for k, v in i.items() %}
{{ k }}={{ v }}
{%- endfor %}
{% endfor %}
{%- else %}
{#- It's a single-level dict. #}
[{{ section_name }}]
{%- for k, v in section_items.items() %}
{{ k }}={{ v }}
{%- endfor %}
{%- endif %}
{% endfor %}

184
aif/network/networkd.py Normal file
View File

@ -0,0 +1,184 @@
import logging
import os
##
# We have to use Jinja2 because while there are ways to *parse* an INI with duplicate keys
# (https://stackoverflow.com/a/38286559/733214), there's no way to *write* an INI with them using configparser.
# So we use Jinja2 logic.
import jinja2
##
import aif.utils
from . import _common


_logger = logging.getLogger(__name__)


class Connection(_common.BaseConnection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.provider_type = 'systemd-networkd'
self.packages = set()
# A dict literal can't hold duplicate keys (later entries silently win), so each
# source unit maps to a *list* of symlink destinations.
self.services = {
('/usr/lib/systemd/system/systemd-networkd.service'): ['etc/systemd/system/'
'multi-user.target.wants/'
'systemd-networkd.service',
'etc/systemd/system/'
'dbus-org.freedesktop.network1.service'],
('/usr/lib/systemd/system/systemd-networkd.socket'): ['etc/systemd/system/'
'sockets.target.wants/systemd-networkd.socket'],
('/usr/lib/systemd/system/systemd-networkd-wait-online.service'): ['etc/systemd/system/'
'network-online.target.wants/'
'systemd-networkd-wait-online.service'],
# We include these *even if* self.auto['resolvers'][*] are false.
('/usr/lib/systemd/system/systemd-resolved.service'): ['etc/systemd/system/'
'dbus-org.freedesktop.resolve1.service',
'etc/systemd/system/multi-user.target.wants/'
'systemd-resolved.service']}
self._wpasupp = {}
self._initJ2()

def _initCfg(self):
_logger.info('Building config.')
if self.device == 'auto':
self.device = _common.getDefIface(self.connection_type)
self._cfg = {'Match': {'Name': self.device},
'Network': {'Description': ('A {0} profile for {1} '
'(generated by AIF-NG)').format(self.connection_type,
self.device),
'DefaultRouteOnDevice': ('true' if self.is_defroute else 'false'),
# This (may) get modified by logic below.
'IPv6AcceptRA': 'false',
'LinkLocalAddressing': 'no'}}
if self.domain:
self._cfg['Network']['Domains'] = self.domain
if self.resolvers:
self._cfg['Network']['DNS'] = [str(ip) for ip in self.resolvers]
if all((self.auto['addresses']['ipv4'], self.auto['addresses']['ipv6'])):
self._cfg['Network']['IPv6AcceptRA'] = 'true'
self._cfg['Network']['LinkLocalAddressing'] = 'ipv6'
self._cfg['Network']['DHCP'] = 'yes'
elif self.auto['addresses']['ipv4'] and not self.auto['addresses']['ipv6']:
self._cfg['Network']['DHCP'] = 'ipv4'
elif (not self.auto['addresses']['ipv4']) and self.auto['addresses']['ipv6']:
self._cfg['Network']['IPv6AcceptRA'] = 'true'
self._cfg['Network']['LinkLocalAddressing'] = 'ipv6'
self._cfg['Network']['DHCP'] = 'ipv6'
else:
self._cfg['Network']['DHCP'] = 'no'
if any((self.auto['addresses']['ipv4'], self.auto['routes']['ipv4'], self.auto['resolvers']['ipv4'])):
t = 'ipv4'
self._cfg['DHCPv4'] = {'UseDNS': ('true' if self.auto['resolvers'][t] else 'false'),
'UseRoutes': ('true' if self.auto['routes'][t] else 'false')}
if any((self.auto['addresses']['ipv6'], self.auto['routes']['ipv6'], self.auto['resolvers']['ipv6'])):
t = 'ipv6'
self._cfg['Network']['IPv6AcceptRA'] = 'true'
self._cfg['DHCPv6'] = {'UseDNS': ('true' if self.auto['resolvers'][t] else 'false')}
for t in ('ipv4', 'ipv6'):
if self.addrs[t]:
if t == 'ipv6':
self._cfg['Network']['LinkLocalAddressing'] = 'ipv6'
if 'Address' not in self._cfg.keys():
self._cfg['Address'] = []
for addr, net, gw in self.addrs[t]:
a = {'Address': '{0}/{1}'.format(str(addr), str(net.prefixlen))}
self._cfg['Address'].append(a)
if self.routes[t]:
if 'Route' not in self._cfg.keys():
self._cfg['Route'] = []
for route, net, gw in self.routes[t]:
r = {'Gateway': str(gw),
'Destination': '{0}/{1}'.format(str(route), str(net.prefixlen))}
self._cfg['Route'].append(r)
if self._cfg['Network']['IPv6AcceptRA'] == 'true':
self._cfg['Network']['LinkLocalAddressing'] = 'ipv6'
if 'IPv6AcceptRA' not in self._cfg.keys():
self._cfg['IPv6AcceptRA'] = {'UseDNS': ('true' if self.auto['resolvers']['ipv6'] else 'false')}
self._initConnCfg()
_logger.info('Config built successfully.')
return(None)

def _initJ2(self):
_logger.debug('Fetching template from networkd.conf.j2')
self.j2_env = jinja2.Environment(loader = jinja2.FileSystemLoader(searchpath = './'))
self.j2_env.filters.update(aif.utils.j2_filters)
self.j2_tpl = self.j2_env.get_template('networkd.conf.j2')
return(None)

def writeConf(self, chroot_base):
cfgroot = os.path.join(chroot_base, 'etc', 'systemd', 'network')
# systemd-networkd only loads config files with a .network suffix.
cfgfile = os.path.join(cfgroot, '{0}.network'.format(self.id))
os.makedirs(cfgroot, exist_ok = True)
os.chown(cfgroot, 0, 0)
os.chmod(cfgroot, 0o0755)
with open(cfgfile, 'w') as fh:
fh.write(self.j2_tpl.render(cfg = self._cfg))
os.chmod(cfgfile, 0o0644)
os.chown(cfgfile, 0, 0)
self._writeConnCfg(chroot_base)
_logger.info('Wrote: {0}'.format(cfgfile))
_logger.debug('Rendering variables: {0}'.format(self._cfg))
_logger.debug('Rendered template: {0}'.format(self.j2_tpl.render(cfg = self._cfg)))
return(None)


class Ethernet(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'ethernet'
self._initCfg()


class Wireless(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'wireless'
self.packages.add('wpa_supplicant')
self._initCfg()

def _initConnCfg(self):
self._wpasupp['ssid'] = '"{0}"'.format(self.xml.attrib['essid'])
hidden = aif.utils.xmlBool(self.xml.attrib.get('hidden', 'false'))
if hidden:
self._wpasupp['scan_ssid'] = 1
try:
bssid = self.xml.attrib.get('bssid').strip()
except AttributeError:
bssid = None
if bssid:
bssid = _common.canonizeEUI(bssid)
self._wpasupp['bssid'] = bssid
self._wpasupp['bssid_whitelist'] = bssid
crypto = self.xml.find('encryption')
if crypto:
crypto = _common.convertWifiCrypto(crypto, self.xml.attrib['essid'])
# if crypto['type'] in ('wpa', 'wpa2', 'wpa3'):
# TODO: WPA2 enterprise
if crypto['type'] in ('wpa', 'wpa2'):
self._wpasupp['psk'] = crypto['auth']['psk']
else:
self._wpasupp['key_mgmt'] = 'NONE'
_logger.debug('Fetching template from wpa_supplicant.conf.j2')
self.wpasupp_tpl = self.j2_env.get_template('wpa_supplicant.conf.j2')
self.services[('/usr/lib/systemd/system/wpa_supplicant@.service')] = ('etc/systemd/'
'system/'
'multi-user.target.wants/'
'wpa_supplicant@'
'{0}.service').format(self.device)
return(None)

def _writeConnCfg(self, chroot_base):
cfgroot = os.path.join(chroot_base, 'etc', 'wpa_supplicant')
cfgfile = os.path.join(cfgroot, 'wpa_supplicant-{0}.conf'.format(self.device))
os.makedirs(cfgroot, exist_ok = True)
os.chown(cfgroot, 0, 0)
os.chmod(cfgroot, 0o0755)
with open(cfgfile, 'w') as fh:
fh.write(self.wpasupp_tpl.render(wpa = self._wpasupp))
os.chown(cfgfile, 0, 0)
os.chmod(cfgfile, 0o0640)
_logger.info('Wrote: {0}'.format(cfgfile))
_logger.debug('Rendering variables: {0}'.format(self._wpasupp))
_logger.debug('Rendered template: {0}'.format(self.wpasupp_tpl.render(wpa = self._wpasupp)))
return(None)
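Rendered through networkd.conf.j2 above, a static two-address ethernet profile comes out roughly as below; the repeated [Address] sections are exactly why Jinja2 is used here instead of configparser:

# Generated by AIF-NG.
[Match]
Name=eth0

[Network]
Description=A ethernet profile for eth0 (generated by AIF-NG)
DefaultRouteOnDevice=true
IPv6AcceptRA=false
LinkLocalAddressing=no
DHCP=no

[Address]
Address=198.51.100.10/24

[Address]
Address=198.51.100.11/24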

168
aif/network/networkmanager.py Normal file
View File

@ -0,0 +1,168 @@
import configparser
import datetime
import logging
import os
import uuid
##
from . import _common


_logger = logging.getLogger(__name__)


class Connection(_common.BaseConnection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.provider_type = 'NetworkManager'
self.packages = {'networkmanager'}
self.services = {
('/usr/lib/systemd/system/NetworkManager.service'): ('etc/systemd/system/'
'multi-user.target.wants/'
'NetworkManager.service'),
('/usr/lib/systemd/system/NetworkManager-dispatcher.service'): ('etc/systemd/system/'
'dbus-org.freedesktop.'
'nm-dispatcher.service'),
('/usr/lib/systemd/system/NetworkManager-wait-online.service'): ('etc/systemd/'
'system/'
'network-online.target.wants/'
'NetworkManager-wait-online.service')}
self.uuid = uuid.uuid4()

def _initCfg(self):
_logger.info('Building config.')
if self.device == 'auto':
self.device = _common.getDefIface(self.connection_type)
self._cfg = configparser.ConfigParser(allow_no_value = True, interpolation = None)
self._cfg.optionxform = str
self._cfg['connection'] = {'id': self.id,
'uuid': self.uuid,
'type': self.connection_type,
'interface-name': self.device,
'permissions': '',
'timestamp': datetime.datetime.utcnow().timestamp()}
# We *theoretically* could do this in _initAddrs() but we do it separately so we can trim out duplicates.
# TODO: rework this? we technically don't need to split in ipv4/ipv6 since ipaddress does that for us.
for addrtype, addrs in self.addrs.items():
self._cfg[addrtype] = {}
cidr_gws = {}
# Routing
if not self.is_defroute:
self._cfg[addrtype]['never-default'] = 'true'
if not self.auto['routes'][addrtype]:
self._cfg[addrtype]['ignore-auto-routes'] = 'true'
# DNS
self._cfg[addrtype]['dns-search'] = (self.domain if self.domain else '')
if not self.auto['resolvers'][addrtype]:
self._cfg[addrtype]['ignore-auto-dns'] = 'true'
# Address handling
if addrtype == 'ipv6':
self._cfg[addrtype]['addr-gen-mode'] = 'stable-privacy'
if not addrs and not self.auto['addresses'][addrtype]:
self._cfg[addrtype]['method'] = 'ignore'
elif self.auto['addresses'][addrtype]:
if addrtype == 'ipv4':
self._cfg[addrtype]['method'] = 'auto'
else:
self._cfg[addrtype]['method'] = ('auto' if self.auto['addresses'][addrtype] == 'slaac'
else 'dhcp6')
else:
self._cfg[addrtype]['method'] = 'manual'
for idx, (ip, cidr, gw) in enumerate(addrs):
if cidr not in cidr_gws.keys():
cidr_gws[cidr] = gw
new_cidr = True
else:
new_cidr = False
addrnum = idx + 1
addr_str = '{0}/{1}'.format(str(ip), str(cidr.prefixlen))
if new_cidr:
addr_str = '{0},{1}'.format(addr_str, str(gw))
self._cfg[addrtype]['address{0}'.format(addrnum)] = addr_str
# Resolvers
for resolver in self.resolvers:
if addrtype == 'ipv{0}'.format(resolver.version):
if 'dns' not in self._cfg[addrtype]:
self._cfg[addrtype]['dns'] = []
self._cfg[addrtype]['dns'].append(str(resolver))
if 'dns' in self._cfg[addrtype].keys():
self._cfg[addrtype]['dns'] = '{0};'.format(';'.join(self._cfg[addrtype]['dns']))
# Routes
for idx, (dest, net, gw) in enumerate(self.routes[addrtype]):
routenum = idx + 1
self._cfg[addrtype]['route{0}'.format(routenum)] = '{0}/{1},{2}'.format(str(dest),
str(net.prefixlen),
str(gw))
self._initConnCfg()
_logger.info('Config built successfully.')
# TODO: does this render correctly?
# This is only for debug logging.
_logout = {}
for s in self._cfg.sections():
_logout[s] = dict(self._cfg[s])
_logger.debug('Config: {0}'.format(_logout))
return(None)

def writeConf(self, chroot_base):
cfgroot = os.path.join(chroot_base, 'etc', 'NetworkManager')
cfgdir = os.path.join(cfgroot, 'system-connections')
cfgpath = os.path.join(cfgdir, '{0}.nmconnection'.format(self.id))
os.makedirs(cfgdir, exist_ok = True)
with open(cfgpath, 'w') as fh:
self._cfg.write(fh, space_around_delimiters = False)
for root, dirs, files in os.walk(cfgroot):
os.chown(root, 0, 0)
for d in dirs:
dpath = os.path.join(root, d)
os.chown(dpath, 0, 0)
for f in files:
fpath = os.path.join(root, f)
os.chown(fpath, 0, 0)
os.chmod(cfgroot, 0o0755)
os.chmod(cfgdir, 0o0700)
os.chmod(cfgpath, 0o0600)
_logger.info('Wrote: {0}'.format(cfgpath))
return(None)


class Ethernet(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'ethernet'
self._initCfg()

def _initConnCfg(self):
self._cfg[self.connection_type] = {'mac-address-blacklist': ''}
return(None)


class Wireless(Connection):
def __init__(self, iface_xml):
super().__init__(iface_xml)
self.connection_type = 'wireless'
self._initCfg()

def _initConnCfg(self):
self._cfg['wifi'] = {'mac-address-blacklist': '',
'mode': 'infrastructure',
'ssid': self.xml.attrib['essid']}
try:
bssid = self.xml.attrib.get('bssid').strip()
except AttributeError:
bssid = None
if bssid:
bssid = _common.canonizeEUI(bssid)
self._cfg['wifi']['bssid'] = bssid
self._cfg['wifi']['seen-bssids'] = '{0};'.format(bssid)
crypto = self.xml.find('encryption')
if crypto:
self.packages.add('wpa_supplicant')
self._cfg['wifi-security'] = {}
crypto = _common.convertWifiCrypto(crypto, self._cfg['wifi']['ssid'])
# if crypto['type'] in ('wpa', 'wpa2', 'wpa3'):
if crypto['type'] in ('wpa', 'wpa2'):
# TODO: WPA2 enterprise
self._cfg['wifi-security']['key-mgmt'] = 'wpa-psk'
# if crypto['type'] in ('wep', 'wpa', 'wpa2', 'wpa3'):
if crypto['type'] in ('wpa', 'wpa2'):
self._cfg['wifi-security']['psk'] = crypto['auth']['psk']
return(None)
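The keyfile writeConf() drops into system-connections/ (mode 0600) for an all-auto DHCP ethernet connection looks roughly like this; uuid and timestamp are generated at runtime and elided here:

[connection]
id=wired0
uuid=<uuid4>
type=ethernet
interface-name=eth0
permissions=
timestamp=<epoch>

[ipv4]
dns-search=
method=auto

[ipv6]
dns-search=
addr-gen-mode=stable-privacy
method=auto

[ethernet]
mac-address-blacklist=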

9
aif/network/wpa_supplicant.conf.j2 Normal file
View File

@ -0,0 +1,9 @@
# Generated by AIF-NG.
ctrl_interface=/run/wpa_supplicant
update_config=1

network={
{%- for k, v in wpa.items() %}
{{ k }}={{ v }}
{%- endfor %}
}

2
aif/prep.py Normal file
View File

@ -0,0 +1,2 @@
import os
import aif.utils.file_handler

4
aif/software/__init__.py Normal file
View File

@ -0,0 +1,4 @@
from . import config
from . import keyring
from . import objtypes
from . import pacman

124
aif/software/config.py Normal file
View File

@ -0,0 +1,124 @@
import copy
import logging
import os
import re
import shutil
from collections import OrderedDict
##
import jinja2
##
import aif.utils


_logger = logging.getLogger(__name__)


class PacmanConfig(object):
_sct_re = re.compile(r'^\s*\[(?P<sect>[^]]+)\]\s*$')
_kv_re = re.compile(r'^\s*(?P<key>[^\s=[]+)((?:\s*=\s*)(?P<value>.*))?$')
_skipline_re = re.compile(r'^\s*(#.*)?$')
# TODO: Append mirrors/repos to pacman.conf here before we parse?
# I copy a lot of logic from pycman/config.py here.
_list_keys = ('CacheDir', 'HookDir', 'HoldPkg', 'SyncFirst', 'IgnoreGroup', 'IgnorePkg', 'NoExtract', 'NoUpgrade',
'Server')
_single_keys = ('RootDir', 'DBPath', 'GPGDir', 'LogFile', 'Architecture', 'XferCommand', 'CleanMethod', 'SigLevel',
'LocalFileSigLevel', 'RemoteFileSigLevel')
_noval_keys = ('UseSyslog', 'ShowSize', 'TotalDownload', 'CheckSpace', 'VerbosePkgLists', 'ILoveCandy', 'Color',
'DisableDownloadTimeout')
# These are the default (commented-out) values in the stock /etc/pacman.conf as of January 5, 2020.
defaults = OrderedDict({'options': {'Architecture': 'auto',
'CacheDir': '/var/cache/pacman/pkg/',
'CheckSpace': None,
'CleanMethod': 'KeepInstalled',
# 'Color': None,
'DBPath': '/var/lib/pacman/',
'GPGDir': '/etc/pacman.d/gnupg/',
'HoldPkg': 'pacman glibc',
'HookDir': '/etc/pacman.d/hooks/',
'IgnoreGroup': [],
'IgnorePkg': [],
'LocalFileSigLevel': ['Optional'],
'LogFile': '/var/log/pacman.log',
'NoExtract': [],
'NoUpgrade': [],
'RemoteFileSigLevel': ['Required'],
'RootDir': '/',
'SigLevel': ['Required', 'DatabaseOptional'],
# 'TotalDownload': None,
# 'UseSyslog': None,
# 'VerbosePkgLists': None,
'XferCommand': '/usr/bin/curl -L -C - -f -o %o %u'},
# These should be explicitly included in the AIF config.
# 'core': {'Include': '/etc/pacman.d/mirrorlist'},
# 'extra': {'Include': '/etc/pacman.d/mirrorlist'},
# 'community': {'Include': '/etc/pacman.d/mirrorlist'}
})

def __init__(self, chroot_base, confpath = '/etc/pacman.conf'):
self.chroot_base = chroot_base
self.confpath = os.path.join(self.chroot_base, re.sub(r'^/+', '', confpath))
self.confbak = '{0}.bak'.format(self.confpath)
self.mirrorlstpath = os.path.join(self.chroot_base, 'etc', 'pacman.d', 'mirrorlist')
self.mirrorlstbak = '{0}.bak'.format(self.mirrorlstpath)
if not os.path.isfile(self.confbak):
shutil.copy2(self.confpath, self.confbak)
_logger.info('Copied: {0} => {1}'.format(self.confpath, self.confbak))
if not os.path.isfile(self.mirrorlstbak):
shutil.copy2(self.mirrorlstpath, self.mirrorlstbak)
_logger.info('Copied: {0} => {1}'.format(self.mirrorlstpath, self.mirrorlstbak))
self.j2_env = jinja2.Environment(loader = jinja2.FileSystemLoader(searchpath = './'))
self.j2_env.filters.update(aif.utils.j2_filters)
self.j2_conf = self.j2_env.get_template('pacman.conf.j2')
self.j2_mirror = self.j2_env.get_template('mirrorlist.j2')
self.conf = None
self.mirrors = []

def _includeExpander(self, lines):
curlines = []
for line in lines:
r = self._kv_re.search(line)
if r and (r.group('key') == 'Include') and r.group('value'):
path = os.path.join(self.chroot_base, re.sub(r'^/?', '', r.group('value')))
with open(path, 'r') as fh:
curlines.extend(self._includeExpander(fh.read().splitlines()))
else:
curlines.append(line)
return(curlines)

def parse(self, defaults = True):
self.conf = OrderedDict()
rawlines = {}
with open(self.confpath, 'r') as fh:
rawlines['orig'] = [line for line in fh.read().splitlines() if not self._skipline_re.search(line)]
rawlines['parsed'] = self._includeExpander(rawlines['orig'])
for conftype, cfg in rawlines.items():
_confdict = copy.deepcopy(self.defaults)
_sect = None
for line in cfg:
if self._sct_re.search(line):
_sect = self._sct_re.search(line).group('sect')
if _sect not in _confdict.keys():
_confdict[_sect] = OrderedDict()
elif self._kv_re.search(line):
r = self._kv_re.search(line)
k = r.group('key')
v = r.group('value')
if k in self._noval_keys:
_confdict[_sect][k] = None
elif k in self._single_keys:
_confdict[_sect][k] = v
elif k in self._list_keys:
if k not in _confdict[_sect].keys():
_confdict[_sect][k] = []
_confdict[_sect][k].append(v)
if _confdict['options']['Architecture'] == 'auto':
_confdict['options']['Architecture'] = os.uname().machine
self.conf[conftype] = copy.deepcopy(_confdict)
return(None)

def writeConf(self):
with open(self.confpath, 'w') as fh:
fh.write(self.j2_conf.render(cfg = self.conf))
with open(self.mirrorlstpath, 'w') as fh:
fh.write(self.j2_mirror.render(mirrors = self.mirrors))
return(None)
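A minimal usage sketch, assuming the chroot already contains the stock /etc/pacman.conf and that pacman.conf.j2/mirrorlist.j2 resolve on the Jinja2 search path:

from aif.software.config import PacmanConfig

pacman_cfg = PacmanConfig('/mnt/aif')
pacman_cfg.parse()
# conf['orig'] is the file as-is; conf['parsed'] has Include lines expanded.
print(pacman_cfg.conf['parsed']['options']['Architecture'])
pacman_cfg.mirrors.append('https://mirror.example.com/archlinux/$repo/os/$arch')
pacman_cfg.writeConf()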

231
aif/software/keyring.py Normal file
View File

@ -0,0 +1,231 @@
import csv
import logging
import os
import re
import sqlite3
##
import gpg


# We don't use utils.gpg_handler because this is pretty much all procedural.
# Though, maybe add e.g. TofuDB stuff to it, and subclass it here?
# TODO.

_logger = logging.getLogger(__name__)


_createTofuDB = """BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "ultimately_trusted_keys" (
"keyid" TEXT
);
CREATE TABLE IF NOT EXISTS "encryptions" (
"binding" INTEGER NOT NULL,
"time" INTEGER
);
CREATE TABLE IF NOT EXISTS "signatures" (
"binding" INTEGER NOT NULL,
"sig_digest" TEXT,
"origin" TEXT,
"sig_time" INTEGER,
"time" INTEGER,
PRIMARY KEY("binding","sig_digest","origin")
);
CREATE TABLE IF NOT EXISTS "bindings" (
"oid" INTEGER PRIMARY KEY AUTOINCREMENT,
"fingerprint" TEXT,
"email" TEXT,
"user_id" TEXT,
"time" INTEGER,
"policy" INTEGER CHECK(policy in (1,2,3,4,5)),
"conflict" STRING,
"effective_policy" INTEGER DEFAULT 0 CHECK(effective_policy in (0,1,2,3,4,5)),
UNIQUE("fingerprint","email")
);
CREATE TABLE IF NOT EXISTS "version" (
"version" INTEGER
);
INSERT INTO "version" ("version") VALUES (1);
CREATE INDEX IF NOT EXISTS "encryptions_binding" ON "encryptions" (
"binding"
);
CREATE INDEX IF NOT EXISTS "bindings_email" ON "bindings" (
"email"
);
CREATE INDEX IF NOT EXISTS "bindings_fingerprint_email" ON "bindings" (
"fingerprint",
"email"
);
COMMIT;"""


class KeyEditor(object):
def __init__(self, trustlevel = 4):
self.trusted = False
self.revoked = False
self.trustlevel = trustlevel
_logger.info('Key editor instantiated.')

def revoker(self, kw, arg, *args, **kwargs):
# The "save" commands here can also be "quit".
_logger.debug('Key revoker invoked:')
_logger.debug('Command: {0}'.format(kw))
_logger.debug('Argument: {0}'.format(arg))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if kw == 'GET_LINE':
if arg == 'keyedit.prompt':
if not self.revoked:
_logger.debug('Returning: "disable"')
self.revoked = True
return('disable')
else:
_logger.debug('Returning: "save"')
return('save')
else:
_logger.debug('Returning: "save"')
return('save')
return(None)

def truster(self, kw, arg, *args, **kwargs):
_logger.debug('Key trust editor invoked:')
_logger.debug('Command: {0}'.format(kw))
_logger.debug('Argument: {0}'.format(arg))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if kw == 'GET_LINE':
if arg == 'keyedit.prompt':
if not self.trusted:
_logger.debug('Returning: "trust"')
return('trust')
else:
_logger.debug('Returning: "save"')
return('save')
elif arg == 'edit_ownertrust.value' and not self.trusted:
self.trusted = True
_logger.debug('Status changed to trusted')
_logger.debug('Returning: "{0}"'.format(self.trustlevel))
return(str(self.trustlevel))
else:
_logger.debug('Returning: "save"')
return('save')
return(None)


class PacmanKey(object):
def __init__(self, chroot_base):
# We more or less recreate /usr/bin/pacman-key in python.
self.chroot_base = chroot_base
self.home = os.path.join(self.chroot_base, 'etc', 'pacman.d', 'gnupg')
self.conf = os.path.join(self.home, 'gpg.conf')
self.agent_conf = os.path.join(self.home, 'gpg-agent.conf')
self.db = os.path.join(self.home, 'tofu.db')
# ...pacman devs, why do you create the gnupg home with 0755?
os.makedirs(self.home, 0o0755, exist_ok = True)
# Probably not necessary, but...
with open(os.path.join(self.home, '.gpg-v21-migrated'), 'wb') as fh:
fh.write(b'')
_logger.info('Touched/wrote: {0}'.format(os.path.join(self.home, '.gpg-v21-migrated')))
if not os.path.isfile(self.conf):
with open(self.conf, 'w') as fh:
fh.write(('# Generated by AIF-NG.\n'
'no-greeting\n'
'no-permission-warning\n'
'lock-never\n'
'keyserver-options timeout=10\n'))
_logger.info('Wrote: {0}'.format(self.conf))
if not os.path.isfile(self.agent_conf):
with open(self.agent_conf, 'w') as fh:
fh.write(('# Generated by AIF-NG.\n'
'disable-scdaemon\n'))
_logger.info('Wrote: {0}'.format(self.agent_conf))
self.key = None
# ...PROBABLY order-specific.
self._initTofuDB()
self.gpg = gpg.Context(home_dir = self.home)
self._initKey()
self._initPerms()
self._initKeyring()

def _initKey(self):
# These match what is currently used by pacman-key --init.
_keyinfo = {'userid': 'Pacman Keyring Master Key <pacman@localhost>',
'algorithm': 'rsa2048',
'expires_in': 0,
'expires': False,
'sign': True,
'encrypt': False,
'certify': False,
'authenticate': False,
'passphrase': None,
'force': False}
_logger.debug('Creating key with options: {0}'.format(_keyinfo))
genkey = self.gpg.create_key(**_keyinfo)
_logger.info('Created key: {0}'.format(genkey.fpr))
self.key = self.gpg.get_key(genkey.fpr, secret = True)
self.gpg.signers = [self.key]
_logger.debug('Set signer/self key to: {0}'.format(self.key))

def _initKeyring(self):
krdir = os.path.join(self.chroot_base, 'usr', 'share', 'pacman', 'keyrings')
keyrings = [i for i in os.listdir(krdir) if i.endswith('.gpg')]
_logger.info('Importing {0} keyring(s).'.format(len(keyrings)))
for idx, kr in enumerate(keyrings):
krname = re.sub(r'\.gpg$', '', kr)
krfile = os.path.join(krdir, kr)
trustfile = os.path.join(krdir, '{0}-trusted'.format(krname))
revokefile = os.path.join(krdir, '{0}-revoked'.format(krname))
_logger.debug('Importing keyring: {0} ({1}/{2})'.format(krname, (idx + 1), len(keyrings)))
with open(os.path.join(krdir, kr), 'rb') as fh:
imported_keys = self.gpg.key_import(fh.read())
if imported_keys:
_logger.debug('Imported: {0}'.format(imported_keys))
# We also have to sign/trust the keys. I still can't believe there isn't an easier way to do this.
if os.path.isfile(trustfile):
with open(trustfile, 'r') as fh:
for trust in csv.reader(fh, delimiter = ':'):
k_id = trust[0]
k_trust = int(trust[1])
k = self.gpg.get_key(k_id)
self.gpg.key_sign(k, local = True)
editor = KeyEditor(trustlevel = k_trust)
self.gpg.interact(k, editor.truster)
# And revoke keys.
if os.path.isfile(revokefile):
with open(revokefile, 'r') as fh:
for fpr in fh.read().splitlines():
k = self.gpg.get_key(fpr)
editor = KeyEditor()
self.gpg.interact(k, editor.revoker)
return(None)

def _initPerms(self):
# Again, not quite sure why it's so permissive. But pacman-key explicitly does it, so.
filenames = {'pubring': 0o0644,
'trustdb': 0o0644,
'secring': 0o0600}
for fname, filemode in filenames.items():
fpath = os.path.join(self.home, '{0}.gpg'.format(fname))
if not os.path.isfile(fpath):
# TODO: Can we just manually create an empty file, or will GPG not like that?
# I'm fairly certain that the key creation automatically creates these files, so as long as this
# function is run after _initKey() then we should be fine.
# with open(fpath, 'wb') as fh:
# fh.write(b'')
# _logger.info('Wrote: {0}'.format(fpath))
continue
os.chmod(fpath, filemode)
return(None)

def _initTofuDB(self):
# As glad as I am that GnuPG is moving more towards more accessible data structures...
db = sqlite3.connect(self.db)
cur = db.cursor()
cur.executescript(_createTofuDB)
db.commit()
cur.close()
db.close()
return(None)
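A minimal usage sketch. The trusted/revoked files consumed by _initKeyring() are, respectively, colon-delimited fingerprint:trustlevel lines and bare fingerprints, i.e. the same files archlinux-keyring ships under /usr/share/pacman/keyrings/:

from aif.software.keyring import PacmanKey

# Instantiated for its side effects: builds <chroot>/etc/pacman.d/gnupg, generates
# the master key, then imports/signs/trusts/revokes the chroot's shipped keyrings.
pacman_key = PacmanKey('/mnt/aif')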

5
aif/software/mirrorlist.j2 Normal file
View File

@ -0,0 +1,5 @@
# Generated by AIF-NG.
# See /etc/pacman.d/mirrorlist.bak for original version.
{%- for mirror in mirrors %}
Server = {{ mirror }}
{%- endfor %}

72
aif/software/objtypes.py Normal file
View File

@ -0,0 +1,72 @@
import logging
import os
import re
##
from lxml import etree


_logger = logging.getLogger(__name__)


class Mirror(object):
def __init__(self, mirror_xml, repo = None, arch = None):
self.xml = mirror_xml
_logger.debug('mirror_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.uri = self.xml.text
self.real_uri = None
self.aif_uri = None

def parse(self, chroot_base, repo, arch):
self.real_uri = self.uri.replace('$repo', repo).replace('$arch', arch)
if self.uri.startswith('file://'):
self.aif_uri = os.path.join(chroot_base, re.sub(r'^file:///?', '', self.real_uri))


class Package(object):
def __init__(self, package_xml):
self.xml = package_xml
_logger.debug('package_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.name = self.xml.text
self.repo = self.xml.attrib.get('repo')
if self.repo:
self.qualified_name = '{0}/{1}'.format(self.repo, self.name)
else:
self.qualified_name = self.name


class Repo(object):
def __init__(self, chroot_base, repo_xml, arch = 'x86_64'):
# TODO: support Usage? ("REPOSITORY SECTIONS", pacman.conf(5))
self.xml = repo_xml
_logger.debug('repo_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
# TODO: SigLevels?!
self.name = self.xml.attrib['name']
self.conflines = {}
self.mirrors = []
self.parsed_mirrors = []
_mirrors = self.xml.xpath('mirror|include') # "Server" and "Include" respectively in pyalpm lingo.
if _mirrors:
for m in _mirrors:
k = m.tag.title()
if k == 'Mirror':
k = 'Server'
if k not in self.conflines.keys():
self.conflines[k] = []
self.conflines[k].append(m.text)
# TODO; better parsing here. handle in config.py?
# if m.tag == 'include':
# # TODO: We only support one level of includes. Pacman supports unlimited nesting? of includes.
# file_uri = os.path.join(chroot_base, re.sub(r'^/?', '', m.text))
# if not os.path.isfile(file_uri):
# _logger.error('Include file ({0}) does not exist: {1}'.format(m.text, file_uri))
# raise FileNotFoundError('Include file does not exist')
# with open(file_uri, 'r') as fh:
# for line in fh.read().splitlines():
else:
# Default (mirrorlist)
self.conflines['Include'] = ['file:///etc/pacman.d/mirrorlist']
self.enabled = (True if self.xml.attrib.get('enabled', 'true') in ('1', 'true') else False)
self.siglevel = self.xml.attrib.get('sigLevel')
# Flat single-URI support is unfinished; default to None so consumers (e.g.
# pacman.py's _initRepos()) fall back to the Include/mirrorlist branch.
self.uri = None
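A hedged sketch of the repo element this parses, inferred from the attribute and tag lookups above:

from lxml import etree

from aif.software.objtypes import Repo

repo_xml = etree.fromstring('<repo name="multilib" enabled="true" '
                            'sigLevel="Required DatabaseOptional"/>')
repo = Repo('/mnt/aif', repo_xml)
print(repo.name, repo.enabled, repo.siglevel)
# multilib True Required DatabaseOptional
print(repo.conflines)
# {'Include': ['file:///etc/pacman.d/mirrorlist']}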

16
aif/software/pacman.conf.j2 Normal file
View File

@ -0,0 +1,16 @@
# Generated by AIF-NG.
# See /etc/pacman.conf.bak for original version.
{%- for section, kv in cfg.items() %}
[{{ section }}]
{%- for key, value in kv.items() %}
{%- if value is none %}
{{ key }}
{%- elif value|isList %}
{%- for val in value %}
{{ key }} = {{ val }}
{%- endfor %}
{%- else %}
{{ key }} = {{ value }}
{%- endif %}
{%- endfor %}
{% endfor %}

123
aif/software/pacman.py Normal file
View File

@ -0,0 +1,123 @@
# We can manually bootstrap and alter pacman's keyring. But check the bootstrap tarball; we might not need to.
# TODO.

import logging
import os
import re
##
import pyalpm
from lxml import etree
##
from . import keyring
from . import objtypes

_logger = logging.getLogger(__name__)


# TODO: There is some duplication here that we can get rid of in the future. Namely:
# - Mirror URI parsing
# - Unified function for parsing Includes
# - At some point, ideally there should be a MirrorList class that can take (or generate?) a list of Mirrors
# and have a write function to write out a mirror list to a specified location.


class PackageManager(object):
def __init__(self, chroot_base, pacman_xml):
self.xml = pacman_xml
_logger.debug('pacman_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.chroot_base = chroot_base
self.pacman_dir = os.path.join(self.chroot_base, 'var', 'lib', 'pacman')
self.configfile = os.path.join(self.chroot_base, 'etc', 'pacman.conf')
self.keyring = keyring.PacmanKey(self.chroot_base)
self.config = None
self.handler = None
self.repos = []
self.packages = []
self.mirrorlist = []
self._initHandler()
self._initMirrors()
self._initRepos()

def _initHandler(self):
# TODO: Append mirrors/repos to pacman.conf here before we parse?
self.opts = {'Architecture': 'x86_64', # Technically, "auto" but Arch proper only supports x86_64.
'CacheDir': '/var/cache/pacman/pkg/',
'CheckSpace': True,
'CleanMethod': 'KeepInstalled',
# 'Color': None,
'DBPath': '/var/lib/pacman/',
'GPGDir': '/etc/pacman.d/gnupg/',
'HoldPkg': 'pacman glibc',
'HookDir': '/etc/pacman.d/hooks/',
'IgnoreGroup': '',
'IgnorePkg': '',
'LocalFileSigLevel': 'Optional',
'LogFile': '/var/log/pacman.log',
'NoExtract': '',
'NoUpgrade': '',
'RemoteFileSigLevel': 'Required',
'RootDir': '/',
'SigLevel': 'Required DatabaseOptional',
# 'TotalDownload': None,
# 'UseSyslog': None,
# 'VerbosePkgLists': None,
'XferCommand': '/usr/bin/curl -L -C - -f -o %o %u'
}
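# Rebase the path-type options into the chroot and split the space-delimited list options into real lists.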
for k, v in self.opts.items():
if k in ('CacheDir', 'DBPath', 'GPGDir', 'HookDir', 'LogFile', 'RootDir'):
v = re.sub(r'^/+', r'', v)
self.opts[k] = os.path.join(self.chroot_base, v)
if k in ('HoldPkg', 'IgnoreGroup', 'IgnorePkg', 'NoExtract', 'NoUpgrade', 'SigLevel'):
self.opts[k] = v.split()  # Store the split list back; it was previously discarded.
if not self.handler:
self.handler = pyalpm.Handle(self.chroot_base, self.pacman_dir)
# Pretty much blatantly ripped this off of pycman:
# https://github.com/archlinux/pyalpm/blob/master/pycman/config.py
for k in ('LogFile', 'GPGDir', 'NoExtract', 'NoUpgrade'):
setattr(self.handler, k.lower(), self.opts[k])
self.handler.arch = self.opts['Architecture']
if self.opts['IgnoreGroup']:
self.handler.ignoregrps = self.opts['IgnoreGroup']
if self.opts['IgnorePkg']:
self.handler.ignorepkgs = self.opts['IgnorePkg']
return(None)

def _initMirrors(self):
mirrors = self.xml.find('mirrorList')
if mirrors is not None:  # lxml elements are falsy without children; test against None.
_mirrorlist = os.path.join(self.chroot_base, 'etc', 'pacman.d', 'mirrorlist')
with open(_mirrorlist, 'a') as fh:
fh.write('\n# Added by AIF-NG.\n')
for m in mirrors.findall('mirror'):
mirror = objtypes.Mirror(m)
self.mirrorlist.append(mirror)
fh.write('Server = {0}\n'.format(mirror.uri))
_logger.info('Appended: {0}'.format(_mirrorlist))
return(None)

def _initRepos(self):
repos = self.xml.find('repos')
_conf = os.path.join(self.chroot_base, 'etc', 'pacman.conf')
with open(_conf, 'a') as fh:
fh.write('\n# Added by AIF-NG.\n')
for r in repos.findall('repo'):
repo = objtypes.Repo(self.chroot_base, r)
# Disabled repos are written commented out so they can be toggled by hand later.
prefix = ('' if repo.enabled else '#')
fh.write('{0}[{1}]\n'.format(prefix, repo.name))
if repo.siglevel:
fh.write('{0}SigLevel = {1}\n'.format(prefix, repo.siglevel))
if repo.uri:
fh.write('{0}Server = {1}\n'.format(prefix, repo.uri))
else:
fh.write('{0}Include = /etc/pacman.d/mirrorlist\n'.format(prefix))
self.repos.append(repo)
_logger.info('Appended: {0}'.format(_conf))
return(None)

31
aif/system/__init__.py Normal file

@@ -0,0 +1,31 @@
import logging
##
from lxml import etree
##
from . import locales
from . import console
from . import users
from . import services


_logger = logging.getLogger(__name__)


class Sys(object):
def __init__(self, chroot_base, system_xml):
self.xml = system_xml
_logger.debug('system_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
self.chroot_base = chroot_base
self.locale = locales.Locale(self.chroot_base, self.xml.find('locales'))
self.console = console.Console(self.chroot_base, self.xml.find('console'))
self.tz = locales.Timezone(self.chroot_base, self.xml.attrib.get('timezone', 'UTC'))
self.user = users.UserDB(self.chroot_base, self.xml.find('rootPassword'), self.xml.find('users'))
self.services = services.ServiceDB(self.chroot_base, self.xml.find('services'))

def apply(self):
self.locale.writeConf()
self.console.writeConf()
self.tz.apply()
self.user.writeConf()
self.services.apply()
return(None)

112
aif/system/console.py Normal file

@@ -0,0 +1,112 @@
import configparser
import io
import logging
import os
import pathlib
import re


_logger = logging.getLogger(__name__)


_font_re = re.compile(r'(\.(psfu?|fnt))?(\.gz)?$', re.IGNORECASE)
_kbd_re = re.compile(r'(\.map)?(\.gz)?$')
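# Strip font/keymap file suffixes so names compare cleanly, e.g. 'us.map.gz' -> 'us'.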


class Console(object):
def __init__(self, chroot_base, console_xml):
self.xml = console_xml
self.chroot_base = chroot_base
self._cfg = configparser.ConfigParser(allow_no_value = True, interpolation = None)
self._cfg.optionxform = str
self.keyboard = Keyboard(self.chroot_base, self.xml.find('keyboard'))
self.font = Font(self.xml.find('text'))
self._cfg['BASE'] = {}
for i in (self.keyboard, self.font):
self._cfg['BASE'].update(i.settings)

def writeConf(self):
self.font.verify(self.chroot_base)  # Font.verify() should check the target, not the live host.
self.keyboard.verify()
cfg = os.path.join(self.chroot_base, 'etc', 'vconsole.conf')
# We have to strip out the section from the ini.
cfgbuf = io.StringIO()
self._cfg.write(cfgbuf, space_around_delimiters = False)
cfgbuf.seek(0, 0)
with open(cfg, 'w') as fh:
for line in cfgbuf.readlines():
if line.startswith('[BASE]') or line.strip() == '':
continue
fh.write(line)
os.chmod(cfg, 0o0644)
os.chown(cfg, 0, 0)
_logger.info('Wrote: {0}'.format(cfg))
return(None)


class Font(object):
def __init__(self, font_xml):
self.xml = font_xml
self.settings = {}
if self.xml is not None:
chk = {'FONT': self.xml.find('font'),
'FONT_MAP': self.xml.find('map'),
'FONT_UNIMAP': self.xml.find('unicodeMap')}
for setting, xml in chk.items():
if xml is not None:  # Childless lxml elements are falsy; compare to None.
self.settings[setting] = xml.text.strip()
_logger.debug('Rendered settings: {0}'.format(self.settings))

def verify(self, chroot_base = '/'):
if 'FONT' not in self.settings.keys():
_logger.warning('Attempted to verify settings with no chosen font.')
return(None)
fontdir = pathlib.Path(chroot_base).joinpath('usr', 'share', 'kbd', 'consolefonts')
fontnames = [_font_re.sub('', p.stem) for p in fontdir.iterdir() if not p.stem.startswith(('README.',
'partialfonts',
'ERRORS'))]
_logger.debug('Rendered list of supported console fonts on target system: {0}'.format(','.join(fontnames)))
if self.settings['FONT'] not in fontnames:
_logger.error('Console font {0} not installed on target system.'.format(self.settings['FONT']))
raise ValueError('Specified console font not available on target system')
return(True)


class Keyboard(object):
def __init__(self, chroot_base, keyboard_xml):
self.xml = keyboard_xml
self.chroot_base = chroot_base
self.settings = {}
if self.xml is not None:
chk = {'KEYMAP': self.xml.find('map'),
'KEYMAP_TOGGLE': self.xml.find('toggle')}
for setting, xml in chk.items():
if xml is not None:
self.settings[setting] = xml.text.strip()
_logger.debug('Rendered settings: {0}'.format(self.settings))

def verify(self):
kbdnames = []
for i in ('KEYMAP', 'KEYMAP_TOGGLE'):
if i in self.settings.keys():
kbdnames.append(self.settings[i])
if not kbdnames:
_logger.warning('Attempted to verify settings with no chosen keyboard map(s).')
return(None)
keymapdir = os.path.join(self.chroot_base, 'usr', 'share', 'kbd', 'keymaps')
kbdmaps = []
for root, dirs, files in os.walk(keymapdir, topdown = True):
if root.endswith('/include'):
dirs[:] = []
files[:] = []
continue
for f in files:
if f.endswith('.inc'):
continue
kbdmaps.append(_kbd_re.sub('', f))
_logger.debug('Rendered list of supported keyboard maps on target system: {0}'.format(','.join(kbdmaps)))
for k in kbdnames:
if k not in kbdmaps:
_logger.error('Keyboard map {0} not installed on target system.'.format(k))
raise ValueError('Specified keyboard map not available on target system')
return(True)

157
aif/system/locales.py Normal file

@@ -0,0 +1,157 @@
import configparser
import copy
import io
import logging
import os
import re
import shutil
import subprocess


_logger = logging.getLogger(__name__)


# TODO: time
_locale_re = re.compile(r'^#\s')  # Comment-only lines; commented-out locale entries ("#en_US.UTF-8 UTF-8") are kept for parsing.
_locale_def_re = re.compile(r'([^.]*)[^@]*(.*)')
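# _locale_def_re drops the charset component, e.g. 'de_DE.UTF-8@euro' -> 'de_DE@euro'.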


class Locale(object):
def __init__(self, chroot_base, locales_xml):
self.xml = locales_xml
self.chroot_base = chroot_base
self.syslocales = {}
self.userlocales = []
self.rawlocales = None
self._localevars = configparser.ConfigParser(allow_no_value = True, interpolation = None)
self._localevars.optionxform = str
self._localevars['BASE'] = {}
self._initVars()

def _initVars(self):
for l in self.xml.findall('locale'):
locale = l.text.strip()
self._localevars['BASE'][l.attrib['name'].strip()] = locale
if locale not in self.userlocales:
self.userlocales.append(locale)
if not self.userlocales:
self.userlocales = ['en_US', 'en_US.UTF-8']
_logger.debug('Rendered locales: {0}'.format(dict(self._localevars['BASE'])))
_logger.debug('Rendered user locales: {0}'.format(','.join(self.userlocales)))
return(None)

def _verify(self):
localegen = os.path.join(self.chroot_base, 'etc', 'locale.gen') # This *should* be brand new.
with open(localegen, 'r') as fh:
self.rawlocales = fh.read().splitlines()
for idx, line in enumerate(self.rawlocales[:]):
if _locale_re.search(line) or line.strip() == '':
continue
locale, charset = line.split()
locale = locale.replace('#', '')
self.syslocales[locale] = charset
if locale in self.userlocales:
# "Uncomment" the locale (self.writeConf() actually writes the change)
self.rawlocales[idx] = '{0} {1}'.format(locale, charset)
_logger.debug('Rendered system locales: {0}'.format(self.syslocales))
userl = set(self.userlocales)
sysl = set(self.syslocales.keys())
missing_locales = (userl - sysl)
if missing_locales:
_logger.error('Specified locale(s) {0} do not exist on the target system.'.format(missing_locales))
raise ValueError('Missing locale(s)')
return(None)

def writeConf(self):
# We basically recreate locale-gen in python here, more or less.
self._verify()
localegen = os.path.join(self.chroot_base, 'etc', 'locale.gen')
localedbdir = os.path.join(self.chroot_base, 'usr', 'lib', 'locale')
localesrcdir = os.path.join(self.chroot_base, 'usr', 'share', 'i18n')
with open(localegen, 'w') as fh:
fh.write('# Generated by AIF-NG.\n\n')
fh.write('\n'.join(self.rawlocales))
fh.write('\n')
_logger.info('Wrote: {0}'.format(localegen))
# If only the locale DB wasn't in a hopelessly binary format.
# These destinations are built by the below subprocess call.
for root, dirs, files in os.walk(localedbdir):
for f in files:
fpath = os.path.join(root, f)
os.remove(fpath)
for d in dirs:
dpath = os.path.join(root, d)
shutil.rmtree(dpath)
_logger.debug('Pruned locale destination.')
for locale in self.userlocales:
lpath = os.path.join(localesrcdir, 'locales', locale)
charset = self.syslocales[locale]
if os.path.isfile(lpath):
ldef_name = locale
else:
ldef_name = _locale_def_re.sub(r'\g<1>\g<2>', locale)
lpath = os.path.join(localesrcdir, 'locales', ldef_name)
env = copy.deepcopy(dict(os.environ))
env['I18NPATH'] = localesrcdir
_logger.debug('Invocation environment: {0}'.format(env))
cmd = subprocess.run(['localedef',
'--force',
# These are overridden by a prefix env var.
# '--inputfile={0}'.format(lpath),
# '--charmap={0}'.format(os.path.join(localesrcdir, 'charmaps', charset)),
'--inputfile={0}'.format(ldef_name),
'--charmap={0}'.format(charset),
'--alias-file={0}'.format(os.path.join(self.chroot_base,
'usr', 'share', 'locale', 'locale.alias')),
'--prefix={0}'.format(self.chroot_base),
locale],
stdout = subprocess.PIPE,
stderr = subprocess.PIPE,
env = env)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
raise RuntimeError('Failed to render locales successfully')
cfg = os.path.join(self.chroot_base, 'etc', 'locale.conf')
# And now we write the variables.
# We have to strip out the section from the ini.
cfgbuf = io.StringIO()
self._localevars.write(cfgbuf, space_around_delimiters = False)
cfgbuf.seek(0, 0)
with open(cfg, 'w') as fh:
for line in cfgbuf.readlines():
if line.startswith('[BASE]') or line.strip() == '':
continue
fh.write(line)
os.chmod(cfg, 0o0644)
os.chown(cfg, 0, 0)
_logger.info('Wrote: {0}'.format(cfg))
return(None)


class Timezone(object):
def __init__(self, chroot_base, timezone):
self.tz = timezone.strip().replace('.', '/')
self.chroot_base = chroot_base

def _verify(self):
tzfilebase = os.path.join('usr', 'share', 'zoneinfo', self.tz)
tzfile = os.path.join(self.chroot_base, tzfilebase)
if not os.path.isfile(tzfile):
_logger.error('Timezone {0} does not have a matching timezone file on target system.'.format(self.tz))
raise ValueError('Invalid timezone')
return(tzfilebase)

def apply(self):
tzsrcfile = os.path.join('/', self._verify())
tzdestfile = os.path.join(self.chroot_base, 'etc', 'localtime')
if os.path.isfile(tzdestfile):
os.remove(tzdestfile)
os.symlink(tzsrcfile, tzdestfile)
_logger.info('Created symlink: {0} => {1}'.format(tzsrcfile, tzdestfile))
return(None)

74
aif/system/services.py Normal file

@@ -0,0 +1,74 @@
import logging
import os
import pathlib
import re
##
import aif.utils


_logger = logging.getLogger(__name__)


_svc_suffixes = ('service', 'socket', 'device', 'mount', 'automount', 'swap', 'target',
'path', 'timer', 'slice', 'scope')
_svc_re = re.compile(r'\.({0})$'.format('|'.join(_svc_suffixes)))
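# Strips a trailing unit-type suffix, e.g. 'sshd.service' -> 'sshd'.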


class Service(object):
def __init__(self, service_xml):
self.xml = service_xml
self.slice = None
self.unit_file = None
self.dest_file = None
self.name = service_xml.text.strip()
self.enabled = aif.utils.xmlBool(self.xml.attrib.get('status', 'true'))
p = pathlib.Path(self.name)
suffix = p.suffix.lstrip('.')
if suffix in _svc_suffixes:
self.type = suffix
self.name = _svc_re.sub('', self.name)
else:
self.type = 'service'
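# Templated units ('name@instance') resolve to the 'name@.type' unit file, enabled as 'name@instance.type'.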
s = self.name.split('@', 1)
if len(s) > 1:
self.name = s[0]
self.slice = s[1]
self.unit_file = '{0}@.{1}'.format(self.name, self.type)
self.dest_file = '{0}@{1}.{2}'.format(self.name, self.slice, self.type)
else:
self.unit_file = '{0}.{1}'.format(self.name, self.type)
self.dest_file = self.unit_file
if self.slice:
_logger.info('Initialized service: {0}@{1}'.format(self.name, self.slice))
else:
_logger.info('Initialized service: {0}'.format(self.name))
for a in ('name', 'slice', 'type', 'enabled'):
_logger.debug('{0}: {1}'.format(a.title(), getattr(self, a)))


class ServiceDB(object):
def __init__(self, chroot_base, services_xml):
self.xml = services_xml
self.chroot_base = chroot_base
self.systemd_sys = os.path.join(self.chroot_base, 'usr', 'lib', 'systemd', 'system')
self.systemd_host = os.path.join(self.chroot_base, 'etc', 'systemd', 'system')
self.services = []
for service_xml in self.xml.findall('service'):
svc = Service(service_xml)
self.services.append(svc)

def apply(self):
for svc in self.services:
dest_path = os.path.join(self.systemd_host, svc.dest_file)
src_path = os.path.join(self.systemd_sys, svc.unit_file)
if svc.enabled:
if not os.path.isfile(dest_path):
os.symlink(src_path, dest_path)
_logger.info('Created symlink: {0} => {1}'.format(src_path, dest_path))
_logger.debug('{0} enabled'.format(svc.name))
else:
if os.path.exists(dest_path):
os.remove(dest_path)
_logger.info('Removed file/symlink: {0}'.format(dest_path))
_logger.debug('{0} disabled'.format(svc.name))
return(None)

526
aif/system/users.py Normal file

@@ -0,0 +1,526 @@
# There isn't a python package that can manage *NIX users (well), unfortunately.
# So we do something stupid:
# https://www.tldp.org/LDP/sag/html/adduser.html
# https://unix.stackexchange.com/a/153227/284004
# https://wiki.archlinux.org/index.php/users_and_groups#File_list

import datetime
import logging
import os
import re
import shutil
import warnings
##
import passlib.context
import passlib.hash
##
import aif.utils
import aif.constants_fallback


_logger = logging.getLogger(__name__)


_skipline_re = re.compile(r'^\s*(#|$)')
_now = datetime.datetime.utcnow()
_epoch = datetime.datetime.fromtimestamp(0)
_since_epoch = _now - _epoch
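# shadow(5) expresses dates as days since the epoch; the deltas below feed those fields.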


class Group(object):
def __init__(self, group_xml):
self.xml = group_xml
self.name = None
self.gid = None
self.password = None
self.create = False
self.admins = set()
self.members = set()
self.group_entry = []
self.gshadow_entry = []
if self.xml is not None:
self.name = self.xml.attrib['name']
self.gid = self.xml.attrib.get('gid')
# TODO: add to XML?
self.password = Password(self.xml.attrib.get('password'), gshadow = True)
self.password.detectHashType()
self.create = aif.utils.xmlBool(self.xml.attrib.get('create', 'false'))
if self.gid:
self.gid = int(self.gid)
else:
if not self.password.hash:
self.password.hash = '!!'  # Password objects are always truthy; check the hash itself.
_logger.info('Rendered group entry.')
for a in ('name', 'gid', 'password', 'create'):
_logger.debug('{0}: {1}'.format(a.title(), getattr(self, a)))

def genFileLine(self):
if not self.gid:
_logger.error('Group objects must have a gid set before their group/gshadow entries can be generated')
raise RuntimeError('Need GID')
# group(5)
self.group_entry = [self.name, # Group name
'x', # Password, normally, but we use shadow for this
str(self.gid), # GID (as a string, since these get ':'-joined)
','.join(self.members)] # Comma-separated members
# gshadow(5)
self.gshadow_entry = [self.name, # Group name
(self.password.hash if self.password.hash else '!!'), # Password hash (if it has one)
','.join(self.admins), # Users with administrative control of group
','.join(self.members)] # Comma-separated members of group
_logger.debug('Rendered group entry: {0}'.format(self.group_entry))
_logger.debug('Rendered gshadow entry: {0}'.format(self.gshadow_entry))
return(None)

def parseGroupLine(self, line):
groupdict = dict(zip(['name', 'password', 'gid', 'members'],
line.split(':')))
members = [i for i in groupdict['members'].split(',') if i.strip() != '']
if members:
self.members = set(members)
self.gid = int(groupdict['gid'])
self.name = groupdict['name']
_logger.info('Parsed group line.')
for a in ('name', 'gid', 'members'):
_logger.debug('{0}: {1}'.format(a.title(), getattr(self, a)))
return(None)

def parseGshadowLine(self, line):
groupdict = dict(zip(['name', 'password', 'admins', 'members'],
line.split(':')))
self.password = Password(None, gshadow = True)
self.password.hash = groupdict['password']
self.password.detectHashType()
admins = [i for i in groupdict['admins'].split(',') if i.strip() != '']
members = [i for i in groupdict['members'].split(',') if i.strip() != '']
if admins:
self.admins = set(admins)
if members:
self.members = set(members)
_logger.info('Parsed gshadow line.')
for a in ('password', 'admins', 'members'):
_logger.debug('{0}: {1}'.format(a.title(), getattr(self, a)))
return(None)


class Password(object):
def __init__(self, password_xml, gshadow = False):
self.xml = password_xml
self._is_gshadow = gshadow
if not self._is_gshadow:
self.disabled = False
self.password = None
self.hash = None
self.hash_type = None
self.hash_rounds = None
self._pass_context = passlib.context.CryptContext(schemes = ['{0}_crypt'.format(i)
for i in
aif.constants_fallback.CRYPT_SUPPORTED_HASHTYPES])
if self.xml is not None:
if not self._is_gshadow:
self.disabled = aif.utils.xmlBool(self.xml.attrib.get('locked', 'false'))
self._password_xml = self.xml.xpath('passwordPlain|passwordHash')
if self._password_xml:
self._password_xml = self._password_xml[0]
if self._password_xml.tag == 'passwordPlain':
self.password = self._password_xml.text.strip()
self.hash_type = self._password_xml.attrib.get('hashType', 'sha512')
# 5000 rounds is the crypt(3) default.
self.hash_rounds = int(self._password_xml.get('rounds', 5000))
self._pass_context.update(default = '{0}_crypt'.format(self.hash_type))
# Honor the requested hash type; this previously hardcoded sha512_crypt.
self.hash = getattr(passlib.hash,
'{0}_crypt'.format(self.hash_type)).using(rounds = self.hash_rounds).hash(self.password)
else:
self.hash = self._password_xml.text.strip()
self.hash_type = self._password_xml.attrib.get('hashType', '(detect)')
if self.hash_type == '(detect)':
self.detectHashType()
else:
if not self._is_gshadow:
self.disabled = True
self.hash = ''

def detectHashType(self):
if not self.hash.startswith('$'):
if not self._is_gshadow:
self.disabled = True
self.hash = re.sub(r'^[^$]+($)?', r'\g<1>', self.hash)
if self.hash not in ('', None):
self.hash_type = re.sub(r'_crypt$', '', self._pass_context.identify(self.hash))
if not self.hash_type:
_logger.warning('Unable to detect hash type for string {0}'.format(self.hash))
warnings.warn('Could not determine hash type')
return(None)


class User(object):
def __init__(self, user_xml):
self.xml = user_xml
self.name = None
self.uid = None
self.primary_group = None
self.password = None
self.sudo = None
self.sudoPassword = True
self.comment = None
self.shell = None
self.minimum_age = None
self.maximum_age = None
self.warning_period = None
self.inactive_period = None
self.expire_date = None
self.new = False
self.groups = []
self.passwd_entry = []
self.shadow_entry = []
self._initVals()

def _initVals(self):
if self.xml is None:
_logger.debug('Instantiated blank User object.')
# We manually assign these.
return(None)
self.name = self.xml.attrib['name']
# XML declared users are always new.
self.new = True
self.password = Password(self.xml.find('password'))
self.sudo = aif.utils.xmlBool(self.xml.attrib.get('sudo', 'false'))
self.sudoPassword = aif.utils.xmlBool(self.xml.attrib.get('sudoPassword', 'true'))
self.home = self.xml.attrib.get('home', '/home/{0}'.format(self.name))
self.uid = self.xml.attrib.get('uid')
if self.uid:
self.uid = int(self.uid)
self.primary_group = Group(None)
self.primary_group.name = self.xml.attrib.get('group', self.name)
self.primary_group.gid = self.xml.attrib.get('gid')
if self.primary_group.gid:
self.primary_group.gid = int(self.primary_group.gid)
self.primary_group.create = True
self.primary_group.members.add(self.name)
self.shell = self.xml.attrib.get('shell', '/bin/bash')
self.comment = self.xml.attrib.get('comment')
self.minimum_age = int(self.xml.attrib.get('minAge', 0))
self.maximum_age = int(self.xml.attrib.get('maxAge', 0))
self.warning_period = int(self.xml.attrib.get('warnDays', 0))
self.inactive_period = int(self.xml.attrib.get('inactiveDays', 0))
self.expire_date = self.xml.attrib.get('expireDate')
self.last_change = _since_epoch.days - 1
if self.expire_date:
# https://www.w3.org/TR/xmlschema-2/#dateTime
try:
self.expire_date = datetime.datetime.fromtimestamp(int(self.expire_date)) # It's an Epoch
except ValueError:
self.expire_date = re.sub(r'^[+-]', '', self.expire_date) # Strip the useless prefix
# Combine the offset into a strftime/strptime-friendly offset
self.expire_date = re.sub(r'([+-])([0-9]{2}):([0-9]{2})$', r'\g<1>\g<2>\g<3>', self.expire_date)
_common = '%Y-%m-%dT%H:%M:%S'
for t in ('{0}%z'.format(_common), '{0}Z'.format(_common), '{0}.%f%z'.format(_common)):
try:
self.expire_date = datetime.datetime.strptime(self.expire_date, t)
break
except ValueError:
continue
for group_xml in self.xml.findall('xGroup'):
g = Group(group_xml)
g.members.add(self.name)
self.groups.append(g)
_logger.info('User object for {0} instantiated.'.format(self.name))
return(None)

def genFileLine(self):
if not all((self.uid, self.primary_group.gid)):
_logger.error(('User objects must have a uid and primary_group.gid set before their passwd/shadow entries '
'can be generated'))
raise RuntimeError('Need UID/primary_group.gid')
# passwd(5)
self.passwd_entry = [self.name, # Username
'x', # self.password.hash is not used because shadow, but this would be password
str(self.uid), # UID
str(self.primary_group.gid), # GID (User objects track this via their primary group)
(self.comment if self.comment else ''), # GECOS
self.home, # Home directory
self.shell] # Shell
# shadow(5)
self.shadow_entry = [self.name, # Username
self.password.hash, # Password hash (duh)
(str(self.last_change) if self.last_change else ''), # Days since epoch last passwd change
(str(self.minimum_age) if self.minimum_age else '0'), # Minimum password age
(str(self.maximum_age) if self.maximum_age else ''), # Maximum password age
(str(self.warning_period) if self.warning_period else ''), # Passwd expiry warning period
(str(self.inactive_period) if self.inactive_period else ''), # Password inactivity period
(str((self.expire_date - _epoch).days) if self.expire_date else ''), # Expiration date
''] # "Reserved"
_logger.debug('Rendered passwd entry: {0}'.format(self.passwd_entry))
_logger.debug('Rendered shadow entry: {0}'.format(self.shadow_entry))
return(None)

def parsePasswdLine(self, line):
userdict = dict(zip(['name', 'password', 'uid', 'gid', 'comment', 'home', 'shell'],
line.split(':')))
self.name = userdict['name']
self.primary_group = int(userdict['gid']) # This gets transformed by UserDB() to the proper Group() obj
self.uid = int(userdict['uid'])
for k in ('home', 'shell'):
if userdict[k].strip() != '':
setattr(self, k, userdict[k])
_logger.debug('Parsed passwd entry: {0}'.format(userdict))
return(None)

def parseShadowLine(self, line):
shadowdict = dict(zip(['name', 'password', 'last_change', 'minimum_age', 'maximum_age', 'warning_period',
'inactive_period', 'expire_date', 'RESERVED'],
line.split(':')))
self.name = shadowdict['name']
self.password = Password(None)
self.password.hash = shadowdict['password']
self.password.detectHashType()
for i in ('last_change', 'minimum_age', 'maximum_age', 'warning_period', 'inactive_period'):
if shadowdict[i].strip() == '':
setattr(self, i, None)
else:
setattr(self, i, int(shadowdict[i]))
if shadowdict['expire_date'].strip() == '':
self.expire_date = None
else:
# shadow(5) stores this as days since the epoch, not seconds.
self.expire_date = _epoch + datetime.timedelta(days = int(shadowdict['expire_date']))
_logger.debug('Parsed shadow entry: {0}'.format(shadowdict))
return(shadowdict)


class UserDB(object):
def __init__(self, chroot_base, rootpass_xml, users_xml):
self.rootpass = Password(rootpass_xml)
self.xml = users_xml
self.chroot_base = chroot_base
self.sys_users = []
self.sys_groups = []
self.new_users = []
self.new_groups = []
self._valid_uids = {'sys': set(),
'user': set()}
self._valid_gids = {'sys': set(),
'user': set()}
self.passwd_file = os.path.join(chroot_base, 'etc', 'passwd')
self.shadow_file = os.path.join(chroot_base, 'etc', 'shadow')
self.group_file = os.path.join(chroot_base, 'etc', 'group')
self.gshadow_file = os.path.join(chroot_base, 'etc', 'gshadow')
self.logindefs_file = os.path.join(chroot_base, 'etc', 'login.defs')
self.login_defaults = {}
self._parseLoginDefs()
self._parseShadow()
self._parseXML()

def _parseLoginDefs(self):
with open(self.logindefs_file, 'r') as fh:
logindefs = fh.read().splitlines()
for line in logindefs:
if _skipline_re.search(line):
continue
l = [i.strip() for i in line.split(None, 1)]
if len(l) < 2:
l.append(None)
self.login_defaults[l[0]] = l[1]
# Convert to native objects
for k in ('FAIL_DELAY', 'PASS_MAX_DAYS', 'PASS_MIN_DAYS', 'PASS_WARN_AGE', 'UID_MIN', 'UID_MAX',
'SYS_UID_MIN', 'SYS_UID_MAX', 'GID_MIN', 'GID_MAX', 'SYS_GID_MIN', 'SYS_GID_MAX', 'LOGIN_RETRIES',
'LOGIN_TIMEOUT', 'LASTLOG_UID_MAX', 'MAX_MEMBERS_PER_GROUP', 'SHA_CRYPT_MIN_ROUNDS',
'SHA_CRYPT_MAX_ROUNDS', 'SUB_GID_MIN', 'SUB_GID_MAX', 'SUB_GID_COUNT', 'SUB_UID_MIN', 'SUB_UID_MAX',
'SUB_UID_COUNT'):
if k in self.login_defaults.keys():
self.login_defaults[k] = int(self.login_defaults[k])
for k in ('TTYPERM', ):
if k in self.login_defaults.keys():
self.login_defaults[k] = int(self.login_defaults[k], 8)
for k in ('ERASECHAR', 'KILLCHAR', 'UMASK'):
if k in self.login_defaults.keys():
v = self.login_defaults[k]
if v.startswith('0x'):
v = int(v, 16)
elif v.startswith('0'):
v = int(v, 8)
else:
v = int(v)
self.login_defaults[k] = v
for k in ('LOG_UNKFAIL_ENAB', 'LOG_OK_LOGINS', 'SYSLOG_SU_ENAB', 'SYSLOG_SG_ENAB', 'DEFAULT_HOME',
'CREATE_HOME', 'USERGROUPS_ENAB', 'MD5_CRYPT_ENAB'):
if k in self.login_defaults.keys():
v = self.login_defaults[k].lower()
self.login_defaults[k] = (True if v == 'yes' else False)
_logger.debug('Parsed login defaults config: {0}'.format(self.login_defaults))
return(None)

def _parseShadow(self):
sys_shadow = {}
users = {}
groups = {}
for f in ('shadow', 'passwd', 'group', 'gshadow'):
sys_shadow[f] = []
with open(getattr(self, '{0}_file'.format(f)), 'r') as fh:
for line in fh.read().splitlines():
if _skipline_re.search(line):
continue
sys_shadow[f].append(line)
for groupline in sys_shadow['group']:
g = Group(None)
g.parseGroupLine(groupline)
groups[g.gid] = g
for gshadowline in sys_shadow['gshadow']:
g = [i for i in groups.values() if i.name == gshadowline.split(':')[0]][0]
g.parseGshadowLine(gshadowline)
self.sys_groups.append(g)
self.new_groups.append(g)
for userline in sys_shadow['passwd']:
u = User(None)
u.parsePasswdLine(userline)
users[u.name] = u
for shadowline in sys_shadow['shadow']:
u = users[shadowline.split(':')[0]]
u.parseShadowLine(shadowline)
self.sys_users.append(u)
self.new_users.append(u)
# Now that we've native-ized the above, we need to do some associations.
for user in self.sys_users:
for group in self.sys_groups:
if not isinstance(user.primary_group, Group) and user.primary_group == group.gid:
user.primary_group = group
if user.name in group.members and group != user.primary_group:
user.groups.append(group)
if self.rootpass.hash:  # Password objects are always truthy; check for an actual hash.
rootuser = users['root']
rootuser.password = self.rootpass
rootuser.password.detectHashType()
return(None)

def _parseXML(self):
for user_xml in self.xml.findall('user'):
u = User(user_xml)
# TODO: system accounts?
if u.name in [i.name for i in self.new_users]:
_logger.warning('User {0} already specified; skipping to avoid duplicate conflicts.'.format(u.name))
warnings.warn('User already specified')
continue
if not u.uid:
u.uid = self.getAvailUID()
if not u.primary_group.gid:
existing_groups = {i.name: i for i in self.new_groups}
if u.primary_group.name not in existing_groups:
u.primary_group.gid = self.getAvailGID()
self.new_groups.append(u.primary_group)
else:
# Reuse the existing Group object; assigning the bare name would break genFileLine().
u.primary_group = existing_groups[u.primary_group.name]
for idx, g in enumerate(u.groups[:]):
existing_groups = {i.name: i for i in self.new_groups}
if g.name not in existing_groups:
if not g.gid:
g.gid = self.getAvailGID()
self.new_groups.append(g)
else:
if not g.create:
# Reuse the existing Group object rather than the first group in the list.
u.groups[idx] = existing_groups[g.name]
self.new_users.append(u)
return(None)

def getAvailUID(self, system = False):
if not self.login_defaults:
self._parseLoginDefs()
if system:
def_min = int(self.login_defaults.get('SYS_UID_MIN', 500))
def_max = int(self.login_defaults.get('SYS_UID_MAX', 999))
k = 'sys'
else:
def_min = int(self.login_defaults.get('UID_MIN', 1000))
def_max = int(self.login_defaults.get('UID_MAX', 60000))
k = 'user'
if not self._valid_uids[k]:
self._valid_uids[k] = set(i for i in range(def_min, (def_max + 1)))
current_uids = set(i.uid for i in self.new_users)
uid = min(self._valid_uids[k] - current_uids)
return(uid)

def getAvailGID(self, system = False):
if not self.login_defaults:
self._parseLoginDefs()
if system:
def_min = int(self.login_defaults.get('SYS_GID_MIN', 500))
def_max = int(self.login_defaults.get('SYS_GID_MAX', 999))
k = 'sys'
else:
def_min = int(self.login_defaults.get('GID_MIN', 1000))
def_max = int(self.login_defaults.get('GID_MAX', 60000))
k = 'user'
if not self._valid_gids[k]:
self._valid_gids[k] = set(i for i in range(def_min, (def_max + 1)))
current_gids = set(i.gid for i in self.new_groups)
gid = min(self._valid_gids[k] - current_gids)
return(gid)

def writeConf(self):
# We shouldn't really use this, because root should be at the beginning.
users_by_name = sorted(self.new_users, key = lambda user: user.name)
# This automatically puts root first (uid = 0)
users_by_uid = sorted(self.new_users, key = lambda user: user.uid)
# Ditto.
groups_by_name = sorted(self.new_groups, key = lambda group: group.name)
groups_by_gid = sorted(self.new_groups, key = lambda group: group.gid)
for x in (self.new_users, self.new_groups):
for i in x:
i.genFileLine()
for f in (self.passwd_file, self.shadow_file, self.group_file, self.gshadow_file):
backup = '{0}-'.format(f)
shutil.copy2(f, backup)
_logger.info('Wrote: {0}'.format(backup))
with open(self.passwd_file, 'w') as fh:
for u in users_by_uid:
fh.write(':'.join(u.passwd_entry))
fh.write('\n')
_logger.info('Wrote: {0}'.format(self.passwd_file))
with open(self.shadow_file, 'w') as fh:
for u in self.new_users:
fh.write(':'.join(u.shadow_entry))
fh.write('\n')
_logger.info('Wrote: {0}'.format(self.shadow_file))
with open(self.group_file, 'w') as fh:
for g in groups_by_gid:
fh.write(':'.join(g.group_entry))
fh.write('\n')
_logger.info('Wrote: {0}'.format(self.group_file))
with open(self.gshadow_file, 'w') as fh:
for g in groups_by_gid:  # gshadow entries come from groups; this previously iterated users.
fh.write(':'.join(g.gshadow_entry))
fh.write('\n')
_logger.info('Wrote: {0}'.format(self.gshadow_file))
for u in self.new_users:
if u.new:
homedir = os.path.join(self.chroot_base, u.home)
# We only set perms for the homedir itself. It's up to the user to specify in a post script if this
# needs to be different.
if os.path.isdir(homedir):
stats = os.stat(homedir)
_logger.warning('Homedir {0} for user {1} already exists; original stat: {2}'.format(homedir,
u.name,
stats))
os.makedirs(homedir, exist_ok = True)
shutil.copytree(os.path.join(self.chroot_base, 'etc', 'skel'), homedir, dirs_exist_ok = True)  # Py3.8+
os.chown(homedir, u.uid, u.primary_group.gid)
os.chmod(homedir, 0o0750)
for root, dirs, files in os.walk(homedir):
for d in dirs:
dpath = os.path.join(root, d)
os.chown(dpath, u.uid, u.primary_group.gid)
os.chmod(dpath, 0o0700)
for f in files:
fpath = os.path.join(root, f)
os.chown(fpath, u.uid, u.primary_group.gid)
os.chmod(fpath, 0o0600)
if not u.sudo:
continue
sudo_file = os.path.join(self.chroot_base, 'etc', 'sudoers.d', u.name)
with open(sudo_file, 'w') as fh:
fh.write(('# Generated by AIF-NG.\n'
'Defaults:{0} !lecture\n'
'{0} ALL=(ALL) {1}ALL\n').format(u.name,
('NOPASSWD: ' if not u.sudoPassword else '')))
os.chown(sudo_file, 0, 0)
os.chmod(sudo_file, 0o0440)
_logger.info('Wrote: {0}'.format(sudo_file))
return(None)

342
aif/utils/__init__.py Normal file

@@ -0,0 +1,342 @@
import logging
import math
import os
import pathlib
import re
import shlex
import subprocess
##
import psutil
##
from . import parser
from . import file_handler
from . import gpg_handler
from . import hash_handler
from . import sources


_logger = logging.getLogger('utils.__init__')


def checkMounted(devpath):
for p in psutil.disk_partitions(all = True):
if p.device == devpath:
_logger.error(('{0} is mounted at {1} but was specified as a target. '
'Cowardly refusing to run potentially destructive operations on it.').format(devpath,
p.mountpoint))
# TODO: raise only if not dryrun? Raise warning instead if so?
raise RuntimeError('Device mounted in live environment')
return(None)


def collapseKeys(d, keylist = None):
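# Flattens nested dict keys depth-first, e.g. collapseKeys({'a': 1, 'b': {'c': 2}}) -> ['a', 'b', 'c'].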
if not keylist:
keylist = []
for k, v in d.items():
if isinstance(v, dict):
keylist.append(k)
keylist = collapseKeys(v, keylist = keylist)
else:
keylist.append(k)
return(keylist)


def collapseValues(d, vallist = None):
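# Flattens nested dict values depth-first, e.g. collapseValues({'a': 1, 'b': {'c': 2}}) -> [1, 2].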
if not vallist:
vallist = []
for k, v in d.items():
if isinstance(v, dict):
vallist = collapseValues(v, vallist = vallist)
else:
vallist.append(v)
return(vallist)


def hasBin(binary_name):
for p in os.environ.get('PATH', '/usr/bin:/bin').split(':'):
p = os.path.realpath(p)
if os.path.isdir(p) and binary_name in os.listdir(p):
return(os.path.join(p, binary_name))
return(False)


def hasSafeChunks(n):
if (n % 4) != 0:
return(False)
return(True)


def isPowerofTwo(n):
# So dumb. (The bit trick avoids float precision issues with math.log.)
n = int(n)
return(n > 0 and (n & (n - 1)) == 0)


# custom Jinja2 filters
def j2_isDict(value):
return(isinstance(value, dict))


def j2_isList(value):
return(isinstance(value, list))


j2_filters = {'isDict': j2_isDict,
'isList': j2_isList}
# end custom Jinja2 filters
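# These are meant to be registered on a jinja2.Environment, e.g.: env.filters.update(j2_filters).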


def kernelCmdline(chroot_base = '/'):
cmds = {}
chroot_base = pathlib.PosixPath(chroot_base)
cmdline = chroot_base.joinpath('proc', 'cmdline')
if not os.path.isfile(cmdline):
return(cmds)
with open(cmdline, 'r') as fh:
raw_cmds = fh.read().strip()
for c in shlex.split(raw_cmds):
l = c.split('=', 1)
if len(l) < 2:
l.append(None)
cmds[l[0]] = l[1]
return(cmds)


def kernelFilesystems():
# I wish there was a better way of doing this.
# https://unix.stackexchange.com/a/98680
FS_FSTYPES = ['swap']
with open('/proc/filesystems', 'r') as fh:
for line in fh.readlines():
l = [i.strip() for i in line.split()]
if not l:
continue
if len(l) == 1:
FS_FSTYPES.append(l[0])
else:
FS_FSTYPES.append(l[1])
_logger.debug('Built list of pre-loaded filesystem types: {0}'.format(','.join(FS_FSTYPES)))
_mod_dir = os.path.join('/lib/modules',
os.uname().release,
'kernel/fs')
_strip_mod_suffix = re.compile(r'^(?P<fsname>.+)\.ko(\.(x|g)?z)?$', re.IGNORECASE)
try:
for i in os.listdir(_mod_dir):
path = os.path.join(_mod_dir, i)
fs_name = None
if os.path.isdir(path):
fs_name = i
elif os.path.isfile(path):
mod_name = _strip_mod_suffix.search(i)
if mod_name:
fs_name = mod_name.group('fsname')
if fs_name:
# The kernel *probably* has autoloading enabled, but in case it doesn't...
if os.getuid() == 0:
cmd = subprocess.run(['modprobe', fs_name], stderr = subprocess.PIPE, stdout = subprocess.PIPE)
_logger.info('Executed: {0}'.format(' '.join(cmd.args)))
if cmd.returncode != 0:
_logger.warning('Command returned non-zero status')
_logger.debug('Exit status: {0}'.format(str(cmd.returncode)))
for a in ('stdout', 'stderr'):
x = getattr(cmd, a)
if x:
_logger.debug('{0}: {1}'.format(a.upper(), x.decode('utf-8').strip()))
FS_FSTYPES.append(fs_name)
except FileNotFoundError:
# We're running on a kernel that doesn't have modules
_logger.info('Kernel has no modules available')
pass
FS_FSTYPES = sorted(list(set(FS_FSTYPES)))
_logger.debug('Generated full list of FS_FSTYPES: {0}'.format(','.join(FS_FSTYPES)))
return(FS_FSTYPES)


def xmlBool(xmlobj):
# https://bugs.launchpad.net/lxml/+bug/1850221
if isinstance(xmlobj, bool):
return(xmlobj)
if xmlobj.lower() in ('1', 'true'):
return(True)
elif xmlobj.lower() in ('0', 'false'):
return(False)
else:
return(None)


class _Sizer(object):
# We use different methods for converting between storage and BW, and different multipliers for each subtype.
# https://stackoverflow.com/a/12912296/733214
# https://stackoverflow.com/a/52684562/733214
# https://stackoverflow.com/questions/5194057/better-way-to-convert-file-sizes-in-python
# https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)
# https://en.wikipedia.org/wiki/Binary_prefix
# 'decimal' is base-10, 'binary' is base-2. (Duh.)
# "b" = bytes, "n" = given value, and "u" = unit suffix's key in below notes.
storageUnits = {
'decimal': { # n * (10 ** u) = b; b / (10 ** u) = u
0: (None, 'B', 'byte'),
3: ('k', 'kB', 'kilobyte'),
6: ('M', 'MB', 'megabyte'),
9: ('G', 'GB', 'gigabyte'),
12: ('T', 'TB', 'terabyte'),
15: ('P', 'PB', 'petabyte'), # yeah, right.
18: ('E', 'EB', 'exabyte'),
21: ('Z', 'ZB', 'zettabyte'),
24: ('Y', 'YB', 'yottabyte')
},
'binary': { # n * (2 ** u) = b; b / (2 ** u) = u
-1: ('nybble', 'nibble', 'nyble', 'half-byte', 'tetrade'),
10: ('Ki', 'KiB', 'kibibyte'),
20: ('Mi', 'MiB', 'mebibyte'),
30: ('Gi', 'GiB', 'gibibyte'),
40: ('Ti', 'TiB', 'tebibyte'),
50: ('Pi', 'PiB', 'pebibyte'),
60: ('Ei', 'EiB', 'exbibyte'),
70: ('Zi', 'ZiB', 'zebibyte'),
80: ('Yi', 'YiB', 'yobibyte')
}}
# https://en.wikipedia.org/wiki/Bit#Multiple_bits - note that 8 bits = 1 byte
bwUnits = {
'decimal': { # n * (10 ** u) = b; b / (10 ** u) = u
0: (None, 'b', 'bit'),
3: ('k', 'kb', 'kilobit'),
6: ('M', 'Mb', 'megabit'),
9: ('G', 'Gb', 'gigabit'),
12: ('T', 'Tb', 'terabit'),
15: ('P', 'Pb', 'petabit'),
18: ('E', 'Eb', 'exabit'),
21: ('Z', 'Zb', 'zettabit'),
24: ('Y', 'Yb', 'yottabit')
},
'binary': { # n * (2 ** u) = b; b / (2 ** u) = u
2: ('semi-octet', 'quartet', 'quadbit'), # 4 bits = 2 ** 2
10: ('Ki', 'Kib', 'kibibit'),
20: ('Mi', 'Mib', 'mebibit'),
30: ('Gi', 'Gib', 'gibibit'),
40: ('Ti', 'Tib', 'tebibit'),
50: ('Pi', 'Pib', 'pebibit'),
60: ('Ei', 'Eib', 'exbibit'),
70: ('Zi', 'Zib', 'zebibit'),
80: ('Yi', 'Yib', 'yobibit')
}}
valid_storage = []
for unit_type, convpair in storageUnits.items():
for f, l in convpair.items():
for suffix in l:
if suffix not in valid_storage and suffix:
valid_storage.append(suffix)
valid_bw = []
for unit_type, convpair in bwUnits.items():
for f, l in convpair.items():
for suffix in l:
if suffix not in valid_bw and suffix:
valid_bw.append(suffix)

def __init__(self):
pass

def convert(self, n, suffix):
conversion = {}
if suffix in self.valid_storage:
conversion.update(self.convertStorage(n, suffix))
b = conversion['B'] * 8
conversion.update(self.convertBW(b, 'b'))
elif suffix in self.valid_bw:
conversion.update(self.convertBW(n, suffix))
b = conversion['b'] / 8
conversion.update(self.convertStorage(b, 'B'))
return(conversion)

def convertBW(self, n, suffix, target = None):
inBits = None
conversion = None
base_factors = []
if suffix not in self.valid_bw:
_logger.error('Suffix {0} is invalid; must be one of {1}'.format(suffix, ','.join(self.valid_bw)))
raise ValueError('suffix is not a valid unit notation for this conversion')
if target and target not in self.valid_bw:
_logger.error('Target {0} is invalid; must be one of {1}'.format(target, ','.join(self.valid_bw)))
raise ValueError('target is not a valid unit notation for this conversion')
for (_unit_type, _base) in (('decimal', 10), ('binary', 2)):
if target and base_factors and inBits is not None:
break
for u, suffixes in self.bwUnits[_unit_type].items():
if target and inBits is not None and base_factors:
break
if suffix in suffixes:
inBits = n * float(_base ** u)
if target and target in suffixes:
base_factors.append((_base, u, suffixes[1]))
elif not target:
base_factors.append((_base, u, suffixes[1]))
if target:
conversion = float(inBits) / float(base_factors[0][0] ** base_factors[0][1])
else:
if not isinstance(conversion, dict):
conversion = {}
for base, factor, suffix in base_factors:
conversion[suffix] = float(inBits) / float(base ** factor)
return(conversion)

def convertStorage(self, n, suffix, target = None):
inBytes = None
conversion = None
base_factors = []
if suffix not in self.valid_storage:
_logger.error('Suffix {0} is invalid; must be one of {1}'.format(suffix, ','.join(self.valid_storage)))
raise ValueError('suffix is not a valid unit notation for this conversion')
if target and target not in self.valid_storage:
_logger.error('Target {0} is invalid; must be one of {1}'.format(target, ','.join(self.valid_storage)))
raise ValueError('target is not a valid unit notation for this conversion')
for (_unit_type, _base) in (('decimal', 10), ('binary', 2)):
if target and base_factors and inBytes is not None:
break
for u, suffixes in self.storageUnits[_unit_type].items():
if target and inBytes is not None and base_factors:
break
if suffix in suffixes:
inBytes = n * float(_base ** u)
if target and target in suffixes:
base_factors.append((_base, u, suffixes[1]))
elif not target:
base_factors.append((_base, u, suffixes[1]))
if target:
conversion = float(inBytes) / float(base_factors[0][0] ** base_factors[0][1])
else:
if not isinstance(conversion, dict):
conversion = {}
for base, factor, suffix in base_factors:
conversion[suffix] = float(inBytes) / float(base ** factor)
return(conversion)


size = _Sizer()
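# Example conversions (approximate floats):
# size.convertStorage(4, 'GiB', target = 'MB') # -> 4294.967296
# size.convertBW(1, 'Gb', target = 'Mb') # -> 1000.0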


# We do this as base level so they aren't compiled on every invocation/instantiation.
# Unfortunately it has to be at the bottom so we can call the instantiated _Sizer() class.
# parted lib can do SI or IEC. So can we.
_pos_re = re.compile((r'^(?P<pos_or_neg>-|\+)?\s*'
r'(?P<size>[0-9]+)\s*'
# empty means size in sectors
r'(?P<pct_unit_or_sct>%|{0}|)\s*$'.format('|'.join(size.valid_storage))
))


def convertSizeUnit(pos):
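# e.g. '+500GiB' -> (True, 500, 'GiB'); '30%' -> (None, 30, '%'); '2048' -> (None, 2048, '') (sector count).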
orig_pos = pos
pos = _pos_re.search(pos)
if pos:
pos_or_neg = (pos.group('pos_or_neg') if pos.group('pos_or_neg') else None)
if pos_or_neg == '+':
from_beginning = True
elif pos_or_neg == '-':
from_beginning = False
else:
from_beginning = pos_or_neg
_size = int(pos.group('size'))
amt_type = pos.group('pct_unit_or_sct').strip()
else:
_logger.error('Size {0} is invalid; did not match {1}'.format(orig_pos, _pos_re.pattern))
raise ValueError('Invalid size specified')
return((from_beginning, _size, amt_type))

56
aif/utils/file_handler.py Normal file

@@ -0,0 +1,56 @@
import os
import pathlib


class File(object):
def __init__(self, file_path):
self.orig_path = file_path
self.fullpath = os.path.abspath(os.path.expanduser(self.orig_path))
self.path_rel = pathlib.PurePosixPath(self.orig_path)
self.path_full = pathlib.PurePosixPath(self.fullpath)

def __str__(self):
return(self.fullpath)


class Directory(object):
def __init__(self, dir_path):
self.orig_path = dir_path
self.fullpath = os.path.abspath(os.path.expanduser(self.orig_path))
self.path_rel = pathlib.PurePosixPath(self.orig_path)
self.path_full = pathlib.PurePosixPath(self.fullpath)
self.files = []
self.dirs = []

def __str__(self):
return(self.fullpath)

def populateFilesDirs(self, recursive = False, native = False):
if not recursive:
for i in os.listdir(self.fullpath):
if os.path.isdir(os.path.join(self.fullpath, i)):
self.dirs.append(i)
elif os.path.isfile(os.path.join(self.fullpath, i)):
if not native:
self.files.append(i)
else:
self.files.append(File(os.path.join(self.fullpath, i)))
else:
for root, dirs, files in os.walk(self.fullpath):
for f in files:
fpath = os.path.join(root, f)
relfpath = str(pathlib.PurePosixPath(fpath).relative_to(self.path_full))
if not native:
self.files.append(relfpath)
else:
self.files.append(File(fpath))
for d in dirs:
dpath = os.path.join(root, d)
reldpath = str(pathlib.PurePosixPath(dpath).relative_to(self.path_full))
self.dirs.append(reldpath)
relroot = str(pathlib.PurePosixPath(root).relative_to(self.path_full))
if relroot != '.' and relroot not in self.dirs:
self.dirs.append(relroot)
if not native:
self.dirs.sort()
self.files.sort()
return(None)

402
aif/utils/gpg_handler.py Normal file

@@ -0,0 +1,402 @@
import copy
import io
import logging
import os
import shutil
import tempfile
##
import gpg
import gpg.errors


_logger = logging.getLogger(__name__)


class KeyEditor(object):
def __init__(self):
self.trusted = False
_logger.info('Key editor instantiated.')

def truster(self, kw, arg, *args, **kwargs):
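# Callback for gpg.Context.interact(): gpgme hands us status keywords and prompt names,
# and we answer them to walk the --edit-key flow (set 'full' ownertrust, then save).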
_logger.debug('Key trust editor invoked:')
_logger.debug('Command: {0}'.format(kw))
_logger.debug('Argument: {0}'.format(arg))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if kw == 'GET_LINE':
if arg == 'keyedit.prompt':
if not self.trusted:
_logger.debug('Returning: "trust"')
return('trust')
else:
_logger.debug('Returning: "save"')
return('save')
elif arg == 'edit_ownertrust.value' and not self.trusted:
self.trusted = True
_logger.debug('Status changed to trusted')
_logger.debug('Returning: "4"')
return('4') # "Full"
else:
_logger.debug('Returning: "save"')
return('save')
return(None)


class GPG(object):
def __init__(self, home = None, primary_key = None, *args, **kwargs):
self.home = home
self.primary_key = primary_key
self.temporary = None
self.ctx = None
self._imported_keys = []
_logger.debug('Homedir: {0}'.format(self.home))
_logger.debug('Primary key: {0}'.format(self.primary_key))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
_logger.info('Instantiated GPG class.')
self._initContext()

def _initContext(self):
if not self.home:
self.home = tempfile.mkdtemp(prefix = '.aif.', suffix = '.gpg')
self.temporary = True
_logger.debug('Set as temporary home.')
self.home = os.path.abspath(os.path.expanduser(self.home))
_logger.debug('Homedir finalized: {0}'.format(self.home))
if not os.path.isdir(self.home):
os.makedirs(self.home, exist_ok = True)
os.chmod(self.home, 0o0700)
_logger.info('Created {0}'.format(self.home))
self.ctx = gpg.Context(home_dir = self.home)
if self.temporary:
self.primary_key = self.createKey('AIF-NG File Verification Key',
sign = True,
force = True,
certify = True).fpr
self.primary_key = self.findKeyByID(self.primary_key, source = 'secret')
if self.primary_key:
_logger.debug('Found primary key in secret keyring: {0}'.format(self.primary_key.fpr))
else:
_logger.error('Could not find primary key in secret keyring: {0}'.format(self.primary_key))
raise RuntimeError('Primary key not found in secret keyring')
self.ctx.signers = [self.primary_key]
if self.ctx.signers:
_logger.debug('Signers set to: {0}'.format(','.join([k.fpr for k in self.ctx.signers])))
else:
_logger.error('Could not assign signing keys; signing set empty.')
raise RuntimeError('Could not assign signing keys; signing set empty')
return(None)

def clean(self):
# This is mostly just to cleanup the stuff we did before.
_logger.info('Cleaning GPG home.')
self.primary_key = self.primary_key.fpr
if self.temporary:
self.primary_key = None
shutil.rmtree(self.home)
_logger.info('Deleted temporary GPG home: {0}'.format(self.home))
self.ctx = None
return(None)

def createKey(self, userid, *args, **kwargs):
# algorithm=None, expires_in=0, expires=True, sign=False, encrypt=False, certify=False,
# authenticate=False, passphrase=None, force=False
keyinfo = {'userid': userid,
'algorithm': kwargs.get('algorithm', 'rsa4096'),
'expires_in': kwargs.get('expires_in'),
'sign': kwargs.get('sign', True),
'encrypt': kwargs.get('encrypt', False),
'certify': kwargs.get('certify', False),
'authenticate': kwargs.get('authenticate', False),
'passphrase': kwargs.get('passphrase'),
'force': kwargs.get('force')}
_logger.debug('Key creation parameters: {0}'.format(keyinfo))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if not keyinfo['expires_in']:
del(keyinfo['expires_in'])
keyinfo['expires'] = False
k = self.ctx.create_key(**keyinfo)
_logger.info('Created key: {0}'.format(k.fpr))
_logger.debug('Key info: {0}'.format(k))
return(k)

def findKey(self, searchstr, secret = False, local = True, remote = True,
secret_only = False, keyring_import = False, *args, **kwargs):
fltr = 0
if secret:
fltr = fltr | gpg.constants.KEYLIST_MODE_WITH_SECRET
_logger.debug('Added "secret" to filter; new filter value: {0}'.format(fltr))
if local:
fltr = fltr | gpg.constants.KEYLIST_MODE_LOCAL
_logger.debug('Added "local" to filter; new filter value: {0}'.format(fltr))
if remote:
fltr = fltr | gpg.constants.KEYLIST_MODE_EXTERN
_logger.debug('Added "remote" to filter; new filter value: {0}'.format(fltr))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
keys = [k for k in self.ctx.keylist(pattern = searchstr, secret = secret_only, mode = fltr)]
_logger.info('Found {0} keys'.format(len(keys)))
if keys:
_logger.debug('Found keys: {0}'.format(keys))
else:
_logger.warning('Found no keys.')
if keyring_import:
_logger.debug('Importing enabled; importing found keys.')
self.importKeys(keys, native = True)
return(keys)

def findKeyByID(self, key_id, source = 'remote', keyring_import = False, *args, **kwargs):
# So .get_key() CAN get a remote key from a keyserver... but you can't have ANY other keylist modes defined.
# Ugh.
sources = {'remote': gpg.constants.KEYLIST_MODE_EXTERN,
'local': gpg.constants.KEYLIST_MODE_LOCAL,
'secret': gpg.constants.KEYLIST_MODE_WITH_SECRET}
if source not in sources.keys():
_logger.error('Invalid source parameter ({0}); must be one of: {1}'.format(source, sources.keys()))
raise ValueError('Invalid source parameter')
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
orig_mode = self.ctx.get_keylist_mode()
_logger.debug('Original keylist mode: {0}'.format(orig_mode))
self.ctx.set_keylist_mode(sources[source])
_logger.info('Set keylist mode: {0} ({1})'.format(source, sources[source]))
_logger.debug('Searching for key ID: {0}'.format(key_id))
try:
key = self.ctx.get_key(key_id, secret = (True if source == 'secret' else False))
_logger.info('Found key object for {0}'.format(key_id))
_logger.debug('Found key: {0}'.format(key))
except gpg.errors.KeyNotFound:
key = None
_logger.warning('Found no keys.')
self.ctx.set_keylist_mode(orig_mode)
_logger.info('Restored keylist mode ({0})'.format(orig_mode))
if keyring_import and key:
_logger.debug('Importing enabled; importing found keys.')
self.importKeys(key, native = True)
return(key)

def getKey(self, key_id, secret = False, strict = False, *args, **kwargs):
key = None
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
try:
getattr(key_id, 'fpr')
_logger.info('Key specified is already a native key object.')
_logger.debug('Key: {0}'.format(key_id))
return(key_id)
except AttributeError:
if not strict:
_logger.debug('Strict mode disabled; attempting import of {0} first.'.format(key_id))
self.findKeyByID(key_id, keyring_import = True, **kwargs)
try:
key = self.ctx.get_key(key_id, secret = secret)
_logger.info('Found {0}.'.format(key_id))
_logger.debug('Key: {0}'.format(key))
except gpg.errors.KeyNotFound:
_logger.warning('Could not locate {0} in keyring'.format(key_id))
return(key)

def getKeyData(self, keydata, keyring_import = False, *args, **kwargs):
orig_keydata = keydata
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if isinstance(keydata, str):
_logger.debug('String passed as keydata; converting to bytes.')
keydata = keydata.encode('utf-8')
buf = io.BytesIO(keydata)
_logger.info('Parsed {0} bytes; looking for key(s).'.format(buf.getbuffer().nbytes))
keys = [k for k in self.ctx.keylist(source = buf)]
_logger.info('Found {0} key(s) in data.'.format(len(keys)))
if keys:
_logger.debug('Keys found: {0}'.format(keys))
else:
_logger.warning('No keys found in data.')
buf.close()
if keyring_import:
_logger.debug('Importing enabled; importing found keys.')
self.importKeys(keys, native = True)
return((keys, orig_keydata))

def getKeyFile(self, keyfile, keyring_import = False, *args, **kwargs):
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
orig_keyfile = keyfile
keyfile = os.path.abspath(os.path.expanduser(keyfile))
_logger.info('Parsed absolute keyfile path: {0} => {1}'.format(orig_keyfile, keyfile))
with open(keyfile, 'rb') as fh:
rawkey_data = fh.read()
fh.seek(0, 0)
_logger.debug('Parsed {0} bytes; looking for key(s).'.format(len(rawkey_data)))
keys = [k for k in self.ctx.keylist(source = fh)]
_logger.info('Found {0} key(s) in data.'.format(len(keys)))
if keys:
_logger.debug('Keys found: {0}'.format(keys))
else:
_logger.warning('No keys found in data.')
if keyring_import:
_logger.debug('Importing enabled; importing found keys.')
self.importKeys(keys, native = True)
return((keys, rawkey_data))

def importKeys(self, keydata, native = False, local = True, remote = True, *args, **kwargs):
fltr = 0
orig_km = None
keys = []
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if local:
fltr = fltr | gpg.constants.KEYLIST_MODE_LOCAL
_logger.debug('Added "local" to filter; new filter value: {0}'.format(fltr))
if remote:
fltr = fltr | gpg.constants.KEYLIST_MODE_EXTERN
_logger.debug('Added "remote" to filter; new filter value: {0}'.format(fltr))
if self.ctx.get_keylist_mode() != fltr:
orig_km = self.ctx.get_keylist_mode()
self.ctx.set_keylist_mode(fltr)
_logger.info(('Current keylist mode ({0}) doesn\'t match filter ({1}); '
'set to new mode.').format(orig_km, fltr))
if not native: # It's raw key data (.gpg, .asc, etc.).
_logger.info('Non-native keydata specified; parsing.')
formatted_keys = b''
if isinstance(keydata, str):
formatted_keys += keydata.encode('utf-8')
_logger.debug('Specified keydata was a string; converted to bytes.')
elif isinstance(keydata, list):
_logger.debug('Specified keydata was a list/list-like; iterating.')
for idx, k in enumerate(keydata):
_logger.debug('Parsing entry {0} of {1} entries.'.format((idx + 1), len(keydata)))
if isinstance(k, str):
formatted_keys += k.encode('utf-8')
_logger.debug('Keydata ({0}) was a string; converted to bytes.'.format((idx + 1)))
else:
_logger.debug('Keydata ({0}) was already in bytes.'.format((idx + 1)))
formatted_keys += k
else:
_logger.warning('Could not identify keydata reliably; unpredictable results ahead.')
formatted_keys = keydata
rslt = self.ctx.key_import(formatted_keys).imports
_logger.debug('Imported keys: {0}'.format(rslt))
for r in rslt:
k = self.ctx.get_key(r.fpr)
if k:
_logger.debug('Adding key to keylist: {0}'.format(k))
else:
_logger.warning('Could not find key ID {0}.'.format(r.fpr))
keys.append(k)
else: # It's a native Key() object (or a list of them).
_logger.info('Native keydata specified; parsing.')
if not isinstance(keydata, list):
_logger.debug('Specified keydata was not a list/list-like; fixing.')
keydata = [keydata]
keys = keydata
_logger.debug('Importing keys: {0}'.format(keys))
self.ctx.op_import_keys(keydata)
if orig_km:
self.ctx.set_keylist_mode(orig_km)
_logger.info('Restored keylist mode to {0}'.format(orig_km))
for k in keys:
_logger.info('Signing {0} with a local signature.'.format(k.fpr))
self.ctx.key_sign(k, local = True)
_logger.debug('Adding trust for {0}.'.format(k.fpr))
trusteditor = KeyEditor()
self.ctx.interact(k, trusteditor.truster)
return(None)

def verifyData(self, data, keys = None, strict = False, detached = None, *args, **kwargs):
results = {}
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
if keys:
_logger.info('Keys were specified.')
if not isinstance(keys, list):
keys = [self.getKey(keys, source = 'local')]
else:
keys = [self.getKey(k, source = 'local') for k in keys]
_logger.debug('Verifying against keys: {0}'.format(keys))
if isinstance(data, str):
data = data.encode('utf-8')
_logger.debug('Specified data was a string; converted to bytes.')
_logger.info('Verifying {0} bytes of data.'.format(len(data)))
fnargs = {'signed_data': data}
if detached:
_logger.info('Specified a detached signature.')
if isinstance(detached, str):
detached = detached.encode('utf-8')
_logger.debug('Specified signature was a string; converted to bytes.')
if not isinstance(detached, bytes) and not hasattr(detached, 'read'):
_logger.error('Detached signature was neither bytes nor a buffer-like object.')
raise TypeError('detached must be bytes or buffer-like object')
if isinstance(detached, bytes):
_logger.info('Signature length: {0} bytes'.format(len(detached)))
else:
_logger.info('Signature length: {0} bytes'.format(detached.getbuffer().nbytes))
fnargs['signature'] = detached
if strict:
_logger.debug('Strict mode enabled; data must be signed by ALL specified keys.')
fnargs['verify'] = keys
_logger.debug('Verifying with args: {0}'.format(fnargs))
results[None] = self.ctx.verify(**fnargs)
else:
if keys:
_logger.debug('Keys were specified but running in non-strict; iterating over all.')
for k in keys:
_fnargs = copy.deepcopy(fnargs)
_fnargs['verify'] = [k]
_logger.info('Verifying against key {0}'.format(k.fpr))
try:
_logger.debug(('Verifying with args (data-stripped): '
'{0}').format({k: (v if k not in ('signed_data',
'signature')
else '(stripped)') for k, v in _fnargs.items()}))
sigchk = self.ctx.verify(**_fnargs)
_logger.info('Key {0} verification results: {1}'.format(k.fpr, sigchk))
results[k.fpr] = (True, sigchk[1], None)
except gpg.errors.MissingSignatures as e:
_logger.warning('Key {0}: missing signature'.format(k.fpr))
_logger.debug('Key {0} results: {1}'.format(k.fpr, e.results))
results[k.fpr] = (False, e.results, 'Missing Signature')
except gpg.errors.BadSignatures as e:
_logger.warning('Key {0}: bad signature'.format(k.fpr))
_logger.debug('Key {0} results: {1}'.format(k.fpr, e.results))
results[k.fpr] = (False, e.results, 'Bad Signature')
else:
_logger.debug('No keys specified but running in non-strict; accepting any signatures.')
_logger.debug(('Verifying with args (data-stripped): '
'{0}').format({k: (v if k not in ('signed_data',
'signature')
else '(stripped)') for k, v in fnargs.items()}))
results[None] = self.ctx.verify(**fnargs)
_logger.debug('Results for any/all signatures: {0}'.format(results[None]))
return(results)

def verifyFile(self, filepath, *args, **kwargs):
orig_filepath = filepath
filepath = os.path.abspath(os.path.expanduser(filepath))
_logger.debug('File verification invoked. Transformed filepath: {0} => {1}'.format(orig_filepath, filepath))
if args:
_logger.debug('args: {0}'.format(','.join(args)))
if kwargs:
_logger.debug('kwargs: {0}'.format(kwargs))
with open(filepath, 'rb') as fh:
results = self.verifyData(fh.read(), **kwargs)
return(results)
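
For orientation, a minimal usage sketch of this handler follows. It is a sketch only (the file paths are hypothetical, and error handling is omitted); it imports a raw ASCII-armored key and then checks a detached signature, using only the methods defined above:

# Sketch only; paths are hypothetical.
from aif.utils import gpg_handler

g = gpg_handler.GPG()
with open('/tmp/release.asc', 'r') as fh:
    g.importKeys(fh.read(), native = False)  # raw ASCII-armored key data, not gpg.Key objects
with open('/tmp/archlinux.iso.sig', 'rb') as fh:
    results = g.verifyFile('/tmp/archlinux.iso', detached = fh.read())
# results maps a key fingerprint (or None) to its verification outcome.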

74
aif/utils/hash_handler.py Normal file

@ -0,0 +1,74 @@
import copy
import hashlib
import os
import pathlib
import zlib
##
import aif.constants_fallback


class Hash(object):
def __init__(self, hash_algos = None, *args, **kwargs):
self.hashers = None
self.valid_hashtypes = list(aif.constants_fallback.HASH_SUPPORTED_TYPES)
self.hash_algos = hash_algos
self.configure()

def configure(self, *args, **kwargs):
self.hashers = {}
if self.hash_algos:
if not isinstance(self.hash_algos, list):
self.hash_algos = [self.hash_algos]
else:
self.hash_algos = copy.deepcopy(self.valid_hashtypes)
for h in self.hash_algos:
if h not in self.valid_hashtypes:
raise ValueError('Hash algorithm not supported')
if h not in aif.constants_fallback.HASH_EXTRA_SUPPORTED_TYPES:
hasher = hashlib.new(h)
else: # adler32 and crc32
hasher = getattr(zlib, h)
self.hashers[h] = hasher
return(None)

def hashData(self, data, *args, **kwargs):
results = {}
if not self.hashers or not self.hash_algos:
self.configure()
for hashtype, hasher in self.hashers.items():
if hashtype in aif.constants_fallback.HASH_EXTRA_SUPPORTED_TYPES:
results[hashtype] = hasher(data)
else:
hasher.update(data)
results[hashtype] = hasher.hexdigest()
return(results)

def hashFile(self, file_path, *args, **kwargs):
if not isinstance(file_path, (str, pathlib.Path, pathlib.PurePath)):
raise ValueError('file_path must be a path expression')
file_path = str(file_path)
with open(file_path, 'rb') as fh:
results = self.hashData(fh.read())
return(results)

def verifyData(self, data, checksum, checksum_type, *args, **kwargs):
if isinstance(data, str):
data = data.encode('utf-8')
if not isinstance(checksum, str):
checksum = checksum.decode('utf-8')
if checksum_type not in self.hash_algos:
raise ValueError('Hash algorithm not supported; try reconfiguring')
self.configure()
cksum = self.hashData(data)
cksum_htype = cksum[checksum_type]
result = (cksum_htype == checksum)
return(result)

def verifyFile(self, filepath, checksum, checksum_type, *args, **kwargs):
filepath = os.path.abspath(os.path.expanduser(filepath))
with open(filepath, 'rb') as fh:
result = self.verifyData(fh.read(), checksum, checksum_type, **kwargs)
return(result)
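
A quick usage sketch (values hypothetical, and assuming 'sha256' and 'md5' are among the supported types). Note the mixed result types: hashlib-backed algorithms yield hex digest strings, while the zlib-backed adler32/crc32 yield integers:

from aif.utils.hash_handler import Hash

h = Hash(hash_algos = ['sha256', 'md5'])
digests = h.hashData(b'some data')  # e.g. {'sha256': '<hex digest>', 'md5': '<hex digest>'}
print(h.verifyData(b'some data', digests['sha256'], 'sha256'))  # True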

29
aif/utils/parser.py Normal file

@ -0,0 +1,29 @@
import logging
import re


_logger = logging.getLogger('utils:{0}'.format(__name__))


_uri_re = re.compile((r'^(?P<scheme>[\w]+)://'
r'(?:(?P<user>[^:@]+)(?::(?P<password>[^@]+)?)?@)?'
r'(?P<base>[^/:]+)?'
r'(?::(?P<port>[0-9]+))?'
r'(?P<path>/.*)$'),
re.IGNORECASE)


class URI(object):
def __init__(self, uri):
self.orig_uri = uri
r = _uri_re.search(self.orig_uri)
if not r:
raise ValueError('Not a valid URI')
for k, v in dict(zip(list(_uri_re.groupindex.keys()), r.groups())).items():
setattr(self, k, v)
if self.port:
self.port = int(self.port)
for a in ('base', 'scheme'):
v = getattr(self, a)
if v:
setattr(self, a, v.lower())
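
A quick illustration (hypothetical URI): each named group in the pattern becomes an attribute on the instance, with the port coerced to int and base/scheme lowercased:

from aif.utils.parser import URI

u = URI('FTP://user:secret@Mirror.Example.com:2121/pub/archlinux')
print(u.scheme, u.base, u.port, u.path)  # ftp mirror.example.com 2121 /pub/archlinux
print(u.user, u.password)  # user secret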

365
aif/utils/sources.py Normal file

@ -0,0 +1,365 @@
import ftplib
import io
import logging
import pathlib
import re
##
import requests
import requests.auth
from lxml import etree
##
import aif.constants_fallback
from . import gpg_handler
from . import hash_handler
from . import parser


_logger = logging.getLogger(__name__)


class ChecksumFile(object):
_bsd_re = re.compile(r'^\((?P<fname>.*)\)\s+=\s+(?P<cksum>.*)$')
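# After the leading hash name is split off in _convert(), this matches the rest of a
# BSD-style line, e.g. '(archlinux.iso) = <hex digest>' (example filename hypothetical).
# fname deliberately excludes the parentheses so it can be used directly as a dict key.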

def __init__(self, checksum_xml, filetype):
self.xml = checksum_xml
if self.xml is not None:
_logger.debug('checksum_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
else:
_logger.error('checksum_xml is required but not specified')
raise ValueError('checksum_xml is required')
self.uri = self.xml.text.strip()
self.filetype = filetype
if filetype:
_logger.debug('URI and filetype: {{{0}}}{1}'.format(self.uri, self.filetype))
else:
_logger.error('filetype is required but not specified')
raise ValueError('filetype is required')
self.hashes = None
downloader = getDLHandler(self.uri) # Recursive objects for the win?
dl = downloader(self.xml)
dl.get()
self.data = dl.data.read()
dl.data.seek(0, 0)
self._convert()

def _convert(self):
if not isinstance(self.data, str):
self.data = self.data.decode('utf-8')
self.data = self.data.strip()
self.hashes = {}
if self.filetype not in ('gnu', 'bsd'):
_logger.error('Passed an invalid filetype: {0}'.format(self.filetype))
raise ValueError('filetype attribute must be either "gnu" or "bsd"')
for line in self.data.splitlines():
if self.filetype == 'gnu':
hashtype = None # GNU style splits their hash types into separate files by default.
h, fname = line.split(None, 1)
elif self.filetype == 'bsd':
l = line.split(None, 1)
hashtype = l.pop(0).lower()
r = self._bsd_re.search(l[0])
h = r.group('cksum')
fname = r.group('fname')
if hashtype not in self.hashes:
self.hashes[hashtype] = {}
self.hashes[hashtype][fname] = h
_logger.debug('Generated hash set: {0}'.format(self.hashes))
return(None)


class Downloader(object):
def __init__(self, netresource_xml, *args, **kwargs):
self.xml = netresource_xml
_logger.info('Instantiated class {0}'.format(type(self).__name__))
if netresource_xml is not None:
_logger.debug('netresource_xml: {0}'.format(etree.tostring(self.xml, with_tail = False).decode('utf-8')))
else:
_logger.error('netresource_xml is required but not specified')
raise ValueError('netresource_xml is required')
_logger.debug('args: {0}'.format(','.join(args)))
_logger.debug('kwargs: {0}'.format(kwargs))
self.uri = parser.URI(self.xml.text.strip())
_logger.debug('Parsed URI: {0}'.format(self.uri))
self.user = self.xml.attrib.get('user')
if not self.user and self.uri.user:
self.user = self.uri.user
self.password = self.xml.attrib.get('password')
_logger.debug('Parsed user: {0}'.format(self.user))
_logger.debug('Parsed password: {0}'.format(self.password))
if not self.password and self.uri.password:
self.password = self.uri.password
self.real_uri = ('{0}://'
'{1}'
'{2}'
'{3}').format(self.uri.scheme,
(self.uri.base if self.uri.base else ''),
(':{0}'.format(self.uri.port) if self.uri.port else ''),
self.uri.path)
_logger.debug('Rebuilt URI: {0}'.format(self.real_uri))
self.gpg = None
self.checksum = None
self.data = io.BytesIO()

def get(self):
pass # Dummy method.
return(None)

def parseGpgVerify(self, results):
pass # TODO? Might not need to.
return(None)

def verify(self, verify_xml, *args, **kwargs):
gpg_xml = verify_xml.find('gpg')
if gpg_xml is not None:
_logger.debug('gpg_xml: {0}'.format(etree.tostring(gpg_xml, with_tail = False).decode('utf-8')))
else:
_logger.debug('No <gpg> in verify_xml')
hash_xml = verify_xml.find('hash')
if hash_xml is not None:
_logger.debug('hash_xml: {0}'.format(etree.tostring(hash_xml, with_tail = False).decode('utf-8')))
else:
_logger.debug('No <hash> in verify_xml')
results = {}
if gpg_xml is not None:
results['gpg'] = self.verifyGPG(gpg_xml)
if hash_xml is not None:
results['hash'] = self.verifyHash(hash_xml)
return(results)

def verifyGPG(self, gpg_xml, *args, **kwargs):
results = {}
# We don't allow custom GPG homedirs since this is probably running from a LiveCD/USB/whatever anyways.
# This means we can *always* instantiate the GPG handler from scratch.
self.gpg = gpg_handler.GPG()
_logger.info('Established GPG session.')
_logger.debug('GPG home dir: {0}'.format(self.gpg.home))
_logger.debug('GPG primary key: {0}'.format(self.gpg.primary_key.fpr))
keys_xml = gpg_xml.find('keys')
if keys_xml is not None:
_logger.debug('keys_xml: {0}'.format(etree.tostring(keys_xml, with_tail = False).decode('utf-8')))
else:
_logger.error('No required <keys> in gpg_xml')
raise ValueError('<keys> is required in a GPG verification block')
sigs_xml = gpg_xml.find('sigs')
if sigs_xml is not None:
_logger.debug('sigs_xml: {0}'.format(etree.tostring(sigs_xml, with_tail = False).decode('utf-8')))
else:
_logger.error('No required <sigs> in gpg_xml')
raise ValueError('<sigs> is required in a GPG verification block')
fnargs = {'strict': keys_xml.attrib.get('detect')}
if fnargs['strict']: # We have to manually do this since it's in our parent's __init__
if fnargs['strict'].lower() in ('true', '1'):
fnargs['strict'] = True
else:
fnargs['strict'] = False
else:
fnargs['strict'] = False
fnargs.update(kwargs)
if keys_xml is not None:
fnargs['keys'] = []
for key_id_xml in keys_xml.findall('keyID'):
_logger.debug('key_id_xml: {0}'.format(etree.tostring(key_id_xml, with_tail = False).decode('utf-8')))
if key_id_xml.text == 'auto':
_logger.debug('Key ID was set to "auto"; using {0}'.format(aif.constants_fallback.ARCH_RELENG_KEY))
self.gpg.findKeyByID(aif.constants_fallback.ARCH_RELENG_KEY, source = 'remote',
keyring_import = True, **fnargs)
k = self.gpg.findKeyByID(aif.constants_fallback.ARCH_RELENG_KEY, source = 'local', **fnargs)
else:
_logger.debug('Finding key: {0}'.format(key_id_xml.text.strip()))
self.gpg.findKeyByID(key_id_xml.text.strip(), source = 'remote', keyring_import = True, **fnargs)
k = self.gpg.findKeyByID(key_id_xml.text.strip(), source = 'local', **fnargs)
if k:
_logger.debug('Key {0} found'.format(k.fpr))
else:
_logger.error('Key {0} not found'.format(key_id_xml.text.strip()))
raise RuntimeError('Could not find key ID specified')
fnargs['keys'].append(k)
for key_file_xml in keys_xml.findall('keyFile'):
_logger.debug('key_file_xml: {0}'.format(etree.tostring(key_file_xml,
with_tail = False).decode('utf-8')))
downloader = getDLHandler(key_file_xml.text.strip()) # Recursive objects for the win?
dl = downloader(key_file_xml)
dl.get()
k = self.gpg.getKeyData(dl.data.read(), keyring_import = True, **fnargs)[0]
if k:
fnargs['keys'].extend(k)
else:
pass # No keys found in key file. We log this in GPG.getKeyData() though.
dl.data.seek(0, 0)
if not fnargs['keys']:
_logger.debug('Found no keys in keys_xml')
raise RuntimeError('Could not find any keys')
if sigs_xml is not None:
for sig_text_xml in sigs_xml.findall('signature'):
_logger.debug('Found <signature>')
sig = sig_text_xml.text.strip()
sigchk = self.gpg.verifyData(self.data.read(), detached = sig, **fnargs)
self.data.seek(0, 0)
results.update(sigchk)
for sig_file_xml in sigs_xml.findall('signatureFile'):
_logger.debug('Found <signatureFile>: {0}'.format(sig_file_xml.text.strip()))
downloader = getDLHandler(sig_file_xml.text.strip())
dl = downloader(sig_file_xml)
dl.get()
sigchk = self.gpg.verifyData(self.data.read(), detached = dl.data.read(), **fnargs)
dl.data.seek(0, 0)
self.data.seek(0, 0)
results.update(sigchk)
self.gpg.clean()
_logger.debug('Rendered results: {0}'.format(results))
return(results)

def verifyHash(self, hash_xml, *args, **kwargs):
results = []
algos = [str(ht) for ht in hash_xml.xpath('.//checksum/@hashType|.//checksumFile/@hashType')] # .// keeps the search relative to hash_xml
self.checksum = hash_handler.Hash(hash_algos = algos)
self.checksum.configure()
checksum_xml = hash_xml.findall('checksum')
checksum_file_xml = hash_xml.findall('checksumFile')
checksums = self.checksum.hashData(self.data.read())
self.data.seek(0, 0)
if checksum_file_xml:
for cksum_xml in checksum_file_xml:
_logger.debug('cksum_xml: {0}'.format(etree.tostring(cksum_xml, with_tail = False).decode('utf-8')))
htype = cksum_xml.attrib['hashType'].strip().lower()
ftype = cksum_xml.attrib['fileType'].strip().lower()
fname = cksum_xml.attrib.get('filePath',
pathlib.PurePath(self.uri.path).name)
cksum_file = ChecksumFile(cksum_xml, ftype)
if ftype == 'gnu':
cksum = cksum_file.hashes[None][fname]
elif ftype == 'bsd':
cksum = cksum_file.hashes[htype][fname]
result = (cksum == checksums[htype])
if result:
_logger.debug('Checksum type {0} matches ({1})'.format(htype, cksum))
else:
_logger.warning(('Checksum type {0} mismatch: '
'{1} (data) vs. {2} (specified)').format(htype, checksums[htype], cksum))
results.append(result)
if checksum_xml:
for cksum_xml in checksum_xml:
_logger.debug('cksum_xml: {0}'.format(etree.tostring(cksum_xml, with_tail = False).decode('utf-8')))
# Thankfully, this is a LOT easier.
htype = cksum_xml.attrib['hashType'].strip().lower()
result = (cksum_xml.text.strip().lower() == checksums[htype])
if result:
_logger.debug('Checksum type {0} matches ({1})'.format(htype, checksums[htype]))
else:
_logger.warning(('Checksum type {0} mismatch: '
'{1} (data) vs. {2} (specified)').format(htype,
checksums[htype],
cksum_xml.text.strip().lower()))
results.append(result)
result = all(results)
_logger.debug('Overall result of checksumming: {0}'.format(result))
return(result)


class FSDownloader(Downloader):
def __init__(self, netresource_xml, *args, **kwargs):
super().__init__(netresource_xml, *args, **kwargs)
delattr(self, 'user')
delattr(self, 'password')

def get(self):
self.data.seek(0, 0)
with open(self.uri.path, 'rb') as fh:
self.data.write(fh.read())
self.data.seek(0, 0)
_logger.info('Read in {0} bytes'.format(self.data.getbuffer().nbytes))
return(None)


class FTPDownloader(Downloader):
def __init__(self, netresource_xml, *args, **kwargs):
super().__init__(netresource_xml, *args, **kwargs)
if not self.user:
self.user = ''
if not self.password:
self.password = ''
self.port = (self.uri.port if self.uri.port else 0)
self._conn = None
_logger.debug('User: {0}'.format(self.user))
_logger.debug('Password: {0}'.format(self.password))
_logger.debug('Port: {0}'.format(self.port))

def _connect(self):
self._conn = ftplib.FTP()
self._conn.connect(host = self.uri.base, port = self.port)
self._conn.login(user = self.user, passwd = self.password)
_logger.info('Connected.')
return(None)

def get(self):
self._connect()
self.data.seek(0, 0)
self._conn.retrbinary('RETR {0}'.format(self.uri.path), self.data.write)
self.data.seek(0, 0)
self._close()
_logger.info('Read in {0} bytes'.format(self.data.getbuffer().nbytes))
return(None)

def _close(self):
self._conn.quit()
_logger.info('Closed connection')
return(None)


class FTPSDownloader(FTPDownloader):
def __init__(self, netresource_xml, *args, **kwargs):
super().__init__(netresource_xml, *args, **kwargs)

def _connect(self):
self._conn = ftplib.FTP_TLS()
self._conn.connect(host = self.uri.base, port = self.port)
self._conn.login(user = self.user, passwd = self.password)
self._conn.prot_p()
_logger.info('Connected.')
return(None)


class HTTPDownloader(Downloader):
def __init__(self, netresource_xml, *args, **kwargs):
super().__init__(netresource_xml, *args, **kwargs)
self.auth = self.xml.attrib.get('authType', 'none').lower()
if self.auth == 'none':
_logger.debug('No auth.')
self.auth = None
self.realm = None
self.user = None
self.password = None
else:
if self.auth == 'basic':
self.auth = requests.auth.HTTPBasicAuth(self.user, self.password)
_logger.info('HTTP basic auth configured.')
elif self.auth == 'digest':
self.auth = requests.auth.HTTPDigestAuth(self.user, self.password)
_logger.info('HTTP digest auth configured.')

def get(self):
self.data.seek(0, 0)
req = requests.get(self.real_uri, auth = self.auth)
if not req.ok:
_logger.error('Could not fetch remote resource: {0}'.format(self.real_uri))
raise RuntimeError('Unable to fetch remote resource')
self.data.write(req.content)
self.data.seek(0, 0)
_logger.info('Read in {0} bytes'.format(self.data.getbuffer().nbytes))
return(None)


def getDLHandler(uri):
uri = uri.strip()
if re.search(r'^file://', uri, re.IGNORECASE):
return(FSDownloader)
elif re.search(r'^https?://', uri, re.IGNORECASE):
return(HTTPDownloader)
elif re.search(r'^ftp://', uri, re.IGNORECASE):
return(FTPDownloader)
elif re.search(r'^ftps://', uri, re.IGNORECASE):
return(FTPSDownloader)
else:
_logger.error('Unable to detect which download handler to instantiate.')
raise RuntimeError('Could not detect which download handler to use')
return(None)
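
Tying it together, the intended flow looks roughly like this (a sketch only; netresource_xml stands in for an lxml element whose text is the resource URI, and verify_xml for its accompanying verification element):

handler = getDLHandler(netresource_xml.text)  # pick FS/HTTP/FTP/FTPS by URI scheme
dl = handler(netresource_xml)
dl.get()                                      # fills dl.data, an io.BytesIO
results = dl.verify(verify_xml)               # optional GPG and/or hash verification
payload = dl.data.read()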


@ -1,958 +0,0 @@
#!/usr/bin/env python3

## REQUIRES: ##
# parted #
# sgdisk ### (yes, both)
# python 3 with standard library
# (OPTIONAL) lxml
# pacman in the host environment
# arch-install-scripts: https://www.archlinux.org/packages/extra/any/arch-install-scripts/
# a network connection
# the proper kernel arguments.

try:
from lxml import etree
lxml_avail = True
except ImportError:
import xml.etree.ElementTree as etree # https://docs.python.org/3/library/xml.etree.elementtree.html
lxml_avail = False
import datetime
import shlex
import fileinput
import os
import shutil
import re
import socket
import subprocess
import ipaddress
import copy
import urllib.request as urlrequest
import urllib.parse as urlparse
import urllib.response as urlresponse
from ftplib import FTP_TLS
from io import StringIO

logfile = '/root/aif.log.{0}'.format(int(datetime.datetime.utcnow().timestamp()))

class aif(object):
def __init__(self):
pass
def kernelargs(self):
if 'DEBUG' in os.environ.keys():
kernelparamsfile = '/tmp/cmdline'
else:
kernelparamsfile = '/proc/cmdline'
args = {}
args['aif'] = False
# For FTP or HTTP auth
args['aif_user'] = False
args['aif_password'] = False
args['aif_realm'] = False
args['aif_auth'] = 'basic'
with open(kernelparamsfile, 'r') as f:
cmdline = f.read()
for p in shlex.split(cmdline):
if p.startswith('aif'):
param = p.split('=', 1)
if len(param) == 1:
param.append(True)
args[param[0]] = param[1]
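# Illustrative (hypothetical cmdline): 'aif aif_url=https://example.com/aif.xml'
# parses to args = {'aif': True, 'aif_url': 'https://example.com/aif.xml', ...}.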
if not args['aif']:
exit('You do not have AIF enabled. Exiting.')
args['aif_auth'] = args['aif_auth'].lower()
return(args)
def getConfig(self, args = False):
if not args:
args = self.kernelargs()
# Sanitize the user specification and find which protocol to use
prefix = args['aif_url'].split(':')[0].lower()
# Use the urllib module
if prefix in ('http', 'https', 'file', 'ftp'):
if args['aif_user'] and args['aif_password']:
# Set up Basic or Digest auth.
passman = urlrequest.HTTPPasswordMgrWithDefaultRealm()
if not args['aif_realm']:
passman.add_password(None, args['aif_url'], args['aif_user'], args['aif_password'])
else:
passman.add_password(args['aif_realm'], args['aif_url'], args['aif_user'], args['aif_password'])
if args['aif_auth'] == 'digest':
httpauth = urlrequest.HTTPDigestAuthHandler(passman)
else:
httpauth = urlrequest.HTTPBasicAuthHandler(passman)
httpopener = urlrequest.build_opener(httpauth)
urlrequest.install_opener(httpopener)
with urlrequest.urlopen(args['aif_url']) as f:
conf = f.read()
elif prefix == 'ftps':
if args['aif_user']:
username = args['aif_user']
else:
username = 'anonymous'
if args['aif_password']:
password = args['aif_password']
else:
password = 'anonymous'
filepath = '/'.join(args['aif_url'].split('/')[3:])
server = args['aif_url'].split('/')[2]
content = StringIO()
ftps = FTP_TLS(server)
ftps.login(username, password)
ftps.prot_p()
ftps.retrlines("RETR " + filepath, lambda line: content.write(line + '\n')) # retrlines strips line endings
conf = content.getvalue()
else:
exit('{0} is not a recognised URI type specifier. Must be one of http, https, file, ftp, or ftps.'.format(prefix))
return(conf)

def webFetch(self, uri, auth = False):
# Sanitize the user specification and find which protocol to use
prefix = uri.split(':')[0].lower()
# Use the urllib module
if prefix in ('http', 'https', 'file', 'ftp'):
if auth:
if 'user' in auth.keys() and 'password' in auth.keys():
# Set up Basic or Digest auth.
passman = urlrequest.HTTPPasswordMgrWithDefaultRealm()
if not 'realm' in auth.keys():
passman.add_password(None, uri, auth['user'], auth['password'])
else:
passman.add_password(auth['realm'], uri, auth['user'], auth['password'])
if auth.get('type') == 'digest':
httpauth = urlrequest.HTTPDigestAuthHandler(passman)
else:
httpauth = urlrequest.HTTPBasicAuthHandler(passman)
httpopener = urlrequest.build_opener(httpauth)
urlrequest.install_opener(httpopener)
with urlrequest.urlopen(uri) as f:
data = f.read()
elif prefix == 'ftps':
if auth:
if 'user' in auth.keys():
username = auth['user']
else:
username = 'anonymous'
if 'password' in auth.keys():
password = auth['password']
else:
password = 'anonymous'
filepath = '/'.join(uri.split('/')[3:])
server = uri.split('/')[2]
content = StringIO()
ftps = FTP_TLS(server)
ftps.login(username, password)
ftps.prot_p()
ftps.retrlines("RETR " + filepath, lambda line: content.write(line + '\n')) # retrlines strips line endings
data = content.getvalue().encode('utf-8') # return bytes, matching the urllib branch above
else:
exit('{0} is not a recognised URI type specifier. Must be one of http, https, file, ftp, or ftps.'.format(prefix))
return(data)

def getXML(self, confobj = False):
if not confobj:
confobj = self.getConfig()
xmlobj = etree.fromstring(confobj)
return(xmlobj)
def buildDict(self, xmlobj = False):
if not xmlobj:
xmlobj = self.getXML()
# Set up the skeleton dicts
aifdict = {}
for i in ('disk', 'mount', 'network', 'system', 'users', 'software', 'scripts'):
aifdict[i] = {}
for i in ('network.ifaces', 'system.bootloader', 'system.services', 'users.root'):
i = i.split('.')
dictname = i[0]
keyname = i[1]
aifdict[dictname][keyname] = {}
aifdict['scripts']['pre'] = False
aifdict['scripts']['post'] = False
aifdict['users']['root']['password'] = False
for i in ('repos', 'mirrors', 'packages'):
aifdict['software'][i] = {}
# Set up the dict elements for disk partitioning
for i in xmlobj.findall('storage/disk'):
disk = i.attrib['device']
fmt = i.attrib['diskfmt'].lower()
if not fmt in ('gpt', 'bios'):
exit('Device {0}\'s format "{1}" is not a valid type (one of gpt, bios).'.format(disk,
fmt))
aifdict['disk'][disk] = {}
aifdict['disk'][disk]['fmt'] = fmt
aifdict['disk'][disk]['parts'] = {}
for x in i:
if x.tag == 'part':
partnum = x.attrib['num']
aifdict['disk'][disk]['parts'][partnum] = {}
for a in x.attrib:
aifdict['disk'][disk]['parts'][partnum][a] = x.attrib[a]
# Set up mountpoint dicts
for i in xmlobj.findall('storage/mount'):
device = i.attrib['source']
mntpt = i.attrib['target']
order = int(i.attrib['order'])
if 'fstype' in i.keys():
fstype = i.attrib['fstype']
else:
fstype = None
if 'opts' in i.keys():
opts = i.attrib['opts']
else:
opts = None
aifdict['mount'][order] = {}
aifdict['mount'][order]['device'] = device
aifdict['mount'][order]['mountpt'] = mntpt
aifdict['mount'][order]['fstype'] = fstype
aifdict['mount'][order]['opts'] = opts
# Set up networking dicts
aifdict['network']['hostname'] = xmlobj.find('network').attrib['hostname']
for i in xmlobj.findall('network/iface'):
# Create a dict for the iface name.
iface = i.attrib['device']
proto = i.attrib['netproto']
address = i.attrib['address']
if 'gateway' in i.attrib.keys():
gateway = i.attrib['gateway']
else:
gateway = False
if 'resolvers' in i.attrib.keys():
resolvers = i.attrib['resolvers']
else:
resolvers = False
if iface not in aifdict['network']['ifaces'].keys():
aifdict['network']['ifaces'][iface] = {}
if proto not in aifdict['network']['ifaces'][iface].keys():
aifdict['network']['ifaces'][iface][proto] = {}
if 'gw' not in aifdict['network']['ifaces'][iface][proto].keys():
aifdict['network']['ifaces'][iface][proto]['gw'] = gateway
aifdict['network']['ifaces'][iface][proto]['addresses'] = []
aifdict['network']['ifaces'][iface][proto]['addresses'].append(address)
aifdict['network']['ifaces'][iface]['resolvers'] = []
if resolvers:
for ip in filter(None, re.split(r'[,\s]+', resolvers)):
if ip not in aifdict['network']['ifaces'][iface]['resolvers']:
aifdict['network']['ifaces'][iface]['resolvers'].append(ip)
else:
aifdict['network']['ifaces'][iface][proto]['resolvers'] = False
# Set up the users dicts
aifdict['users']['root']['password'] = xmlobj.find('system/users').attrib['rootpass']
for i in xmlobj.findall('system/users'):
for x in i:
username = x.attrib['name']
aifdict['users'][username] = {}
for a in ('uid', 'group', 'gid', 'password', 'comment', 'sudo'):
if a in x.attrib.keys():
aifdict['users'][username][a] = x.attrib[a]
else:
aifdict['users'][username][a] = None
sudo = x.attrib.get('sudo', 'false').lower() in ('true', '1')
aifdict['users'][username]['sudo'] = sudo
# And we also need to handle the homedir and xgroup situation
for n in ('home', 'xgroup'):
aifdict['users'][username][n] = False
for a in x:
if not aifdict['users'][username][a.tag]:
aifdict['users'][username][a.tag] = {}
for b in a.attrib:
if a.tag == 'xgroup':
if b == 'name':
groupname = a.attrib[b]
if groupname not in aifdict['users'][username]['xgroup'].keys():
aifdict['users'][username]['xgroup'][a.attrib[b]] = {}
else:
aifdict['users'][username]['xgroup'][a.attrib['name']][b] = a.attrib[b]
else:
aifdict['users'][username][a.tag][b] = a.attrib[b]
# And fill in any missing values. We could probably use the XSD and use of defaults to do this, but... oh well.
if isinstance(aifdict['users'][username]['xgroup'], dict):
for g in aifdict['users'][username]['xgroup'].keys():
for k in ('create', 'gid'):
if k not in aifdict['users'][username]['xgroup'][g].keys():
aifdict['users'][username]['xgroup'][g][k] = False
elif k == 'create':
aifdict['users'][username]['xgroup'][g][k] = aifdict['users'][username]['xgroup'][g][k].lower() in ('true', '1')
if isinstance(aifdict['users'][username]['home'], dict):
for k in ('path', 'create'):
if k not in aifdict['users'][username]['home'].keys():
aifdict['users'][username]['home'][k] = False
elif k == 'create':
aifdict['users'][username]['home'][k] = aifdict['users'][username]['home'][k].lower() in ('true', '1')
# Set up the system settings, if applicable.
aifdict['system']['timezone'] = False
aifdict['system']['locale'] = False
aifdict['system']['kbd'] = False
aifdict['system']['chrootpath'] = False
aifdict['system']['reboot'] = False
for i in ('locale', 'timezone', 'kbd', 'chrootpath', 'reboot'):
if i in xmlobj.find('system').attrib:
aifdict['system'][i] = xmlobj.find('system').attrib[i]
if isinstance(aifdict['system']['reboot'], str):
aifdict['system']['reboot'] = aifdict['system']['reboot'].lower() in ('true', '1')
# And now services...
if xmlobj.find('system/service') is None:
aifdict['system']['services'] = False
else:
for x in xmlobj.findall('system/service'):
svcname = x.attrib['name']
state = x.attrib['status'].lower() in ('true', '1')
aifdict['system']['services'][svcname] = {}
aifdict['system']['services'][svcname]['status'] = state
# And software. First the mirror list.
if xmlobj.find('pacman/mirrorlist') is None:
aifdict['software']['mirrors'] = False
else:
aifdict['software']['mirrors'] = []
for x in xmlobj.findall('pacman/mirrorlist'):
for i in x:
aifdict['software']['mirrors'].append(i.text)
# Then the command
if 'command' in xmlobj.find('pacman').attrib:
aifdict['software']['command'] = xmlobj.find('pacman').attrib['command']
else:
aifdict['software']['command'] = False
# And then the repo list.
for x in xmlobj.findall('pacman/repos/repo'):
repo = x.attrib['name']
aifdict['software']['repos'][repo] = {}
aifdict['software']['repos'][repo]['enabled'] = x.attrib['enabled'].lower() in ('true', '1')
aifdict['software']['repos'][repo]['siglevel'] = x.attrib['siglevel']
aifdict['software']['repos'][repo]['mirror'] = x.attrib['mirror']
# And packages.
if xmlobj.find('pacman/software') is None:
aifdict['software']['packages'] = False
else:
aifdict['software']['packages'] = {}
for x in xmlobj.findall('pacman/software/package'):
aifdict['software']['packages'][x.attrib['name']] = {}
if 'repo' in x.attrib:
aifdict['software']['packages'][x.attrib['name']]['repo'] = x.attrib['repo']
else:
aifdict['software']['packages'][x.attrib['name']]['repo'] = None
# The bootloader setup...
for x in xmlobj.find('bootloader').attrib:
aifdict['system']['bootloader'][x] = xmlobj.find('bootloader').attrib[x]
# The script setup...
if xmlobj.find('scripts') is not None:
aifdict['scripts']['pre'] = []
aifdict['scripts']['post'] = []
aifdict['scripts']['pkg'] = []
tempscriptdict = {'pre': {}, 'post': {}, 'pkg': {}}
for x in xmlobj.find('scripts'):
if all(keyname in list(x.attrib.keys()) for keyname in ('user', 'password')):
auth = {}
auth['user'] = x.attrib['user']
auth['password'] = x.attrib['password']
if 'realm' in x.attrib.keys():
auth['realm'] = x.attrib['realm']
if 'authtype' in x.attrib.keys():
auth['type'] = x.attrib['authtype']
scriptcontents = self.webFetch(x.attrib['uri'], auth).decode('utf-8')
else:
scriptcontents = self.webFetch(x.attrib['uri']).decode('utf-8')
tempscriptdict[x.attrib['execution']][x.attrib['order']] = scriptcontents
for d in ('pre', 'post', 'pkg'):
keylst = list(tempscriptdict[d].keys())
keylst.sort(key = int) # 'order' values are numeric strings; sort them numerically, not lexically
for s in keylst:
aifdict['scripts'][d].append(tempscriptdict[d][s])
return(aifdict)

class archInstall(object):
def __init__(self, aifdict):
for k, v in aifdict.items():
setattr(self, k, v)

def format(self):
# NOTE: the following is a dict of fstype codes to their description.
fstypes = {'0700': 'Microsoft basic data', '0c01': 'Microsoft reserved', '2700': 'Windows RE', '3000': 'ONIE config', '3900': 'Plan 9', '4100': 'PowerPC PReP boot', '4200': 'Windows LDM data', '4201': 'Windows LDM metadata', '4202': 'Windows Storage Spaces', '7501': 'IBM GPFS', '7f00': 'ChromeOS kernel', '7f01': 'ChromeOS root', '7f02': 'ChromeOS reserved', '8200': 'Linux swap', '8300': 'Linux filesystem', '8301': 'Linux reserved', '8302': 'Linux /home', '8303': 'Linux x86 root (/)', '8304': 'Linux x86-64 root (/)', '8305': 'Linux ARM64 root (/)', '8306': 'Linux /srv', '8307': 'Linux ARM32 root (/)', '8400': 'Intel Rapid Start', '8e00': 'Linux LVM', 'a500': 'FreeBSD disklabel', 'a501': 'FreeBSD boot', 'a502': 'FreeBSD swap', 'a503': 'FreeBSD UFS', 'a504': 'FreeBSD ZFS', 'a505': 'FreeBSD Vinum/RAID', 'a580': 'Midnight BSD data', 'a581': 'Midnight BSD boot', 'a582': 'Midnight BSD swap', 'a583': 'Midnight BSD UFS', 'a584': 'Midnight BSD ZFS', 'a585': 'Midnight BSD Vinum', 'a600': 'OpenBSD disklabel', 'a800': 'Apple UFS', 'a901': 'NetBSD swap', 'a902': 'NetBSD FFS', 'a903': 'NetBSD LFS', 'a904': 'NetBSD concatenated', 'a905': 'NetBSD encrypted', 'a906': 'NetBSD RAID', 'ab00': 'Recovery HD', 'af00': 'Apple HFS/HFS+', 'af01': 'Apple RAID', 'af02': 'Apple RAID offline', 'af03': 'Apple label', 'af04': 'AppleTV recovery', 'af05': 'Apple Core Storage', 'bc00': 'Acronis Secure Zone', 'be00': 'Solaris boot', 'bf00': 'Solaris root', 'bf01': 'Solaris /usr & Mac ZFS', 'bf02': 'Solaris swap', 'bf03': 'Solaris backup', 'bf04': 'Solaris /var', 'bf05': 'Solaris /home', 'bf06': 'Solaris alternate sector', 'bf07': 'Solaris Reserved 1', 'bf08': 'Solaris Reserved 2', 'bf09': 'Solaris Reserved 3', 'bf0a': 'Solaris Reserved 4', 'bf0b': 'Solaris Reserved 5', 'c001': 'HP-UX data', 'c002': 'HP-UX service', 'ea00': 'Freedesktop $BOOT', 'eb00': 'Haiku BFS', 'ed00': 'Sony system partition', 'ed01': 'Lenovo system partition', 'ef00': 'EFI System', 'ef01': 'MBR partition scheme', 'ef02': 'BIOS boot partition', 'f800': 'Ceph OSD', 'f801': 'Ceph dm-crypt OSD', 'f802': 'Ceph journal', 'f803': 'Ceph dm-crypt journal', 'f804': 'Ceph disk in creation', 'f805': 'Ceph dm-crypt disk in creation', 'fb00': 'VMWare VMFS', 'fb01': 'VMWare reserved', 'fc00': 'VMWare kcore crash protection', 'fd00': 'Linux RAID'}
# We want to build a mapping of commands to run after partitioning. This will be fleshed out in the future to hopefully include more.
formatting = {}
# TODO: we might want to provide a way to let users specify extra options here.
# TODO: label support?
formatting['ef00'] = ['mkfs.vfat', '-F', '32', '%PART%']
formatting['ef01'] = formatting['ef00']
formatting['ef02'] = formatting['ef00']
formatting['8200'] = ['mkswap', '-c', '%PART%']
formatting['8300'] = ['mkfs.ext4', '-c', '-q', '%PART%'] # some people are DEFINITELY not going to be happy about this. we need to figure out a better way to customize this.
for fs in ('8301', '8302', '8303', '8304', '8305', '8306', '8307'):
formatting[fs] = formatting['8300']
#formatting['8e00'] = FOO # TODO: LVM configuration
#formatting['fd00'] = FOO # TODO: MDADM configuration
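# Illustrative (hypothetical device): for partition 2 of /dev/sda with fstype 8300,
# the '%PART%' placeholder becomes '/dev/sda2', i.e. ['mkfs.ext4', '-c', '-q', '/dev/sda2'].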
cmds = []
for d in self.disk:
partnums = [int(x) for x in self.disk[d]['parts'].keys()]
partnums.sort()
cmds.append(['sgdisk', '-Z', d])
if self.disk[d]['fmt'] == 'gpt':
diskfmt = 'gpt'
if len(partnums) >= 129 or partnums[-1] >= 129:
exit('GPT only supports 128 partitions (and partition allocations).')
cmds.append(['sgdisk', '-og', d])
elif self.disk[d]['fmt'] == 'bios':
diskfmt = 'msdos'
cmds.append(['sgdisk', '-om', d])
cmds.append(['parted', d, '--script', '-a', 'optimal'])
with open(logfile, 'a') as log:
for c in cmds:
subprocess.call(c, stdout = log, stderr = subprocess.STDOUT)
cmds = []
disksize = {}
disksize['start'] = subprocess.check_output(['sgdisk', '-F', d])
disksize['max'] = subprocess.check_output(['sgdisk', '-E', d])
for p in partnums:
# Need to do some mathz to get the actual sectors if we're using percentages.
for s in ('start', 'stop'):
val = self.disk[d]['parts'][str(p)][s]
if '%' in val:
stripped = val.replace('%', '')
modifier = re.sub(r'[0-9]+%', '', val)
percent = re.sub(r'[-+]*', '', stripped)
decimal = float(percent) / float(100)
newval = int(float(disksize['max']) * decimal)
if s == 'start':
newval = newval + int(disksize['start'])
self.disk[d]['parts'][str(p)][s] = modifier + str(newval)
if self.disk[d]['fmt'] == 'gpt':
for p in partnums:
size = {}
size['start'] = self.disk[d]['parts'][str(p)]['start']
size['end'] = self.disk[d]['parts'][str(p)]['stop']
fstype = self.disk[d]['parts'][str(p)]['fstype'].lower()
if fstype not in fstypes.keys():
print('Filesystem type {0} is not valid. Must be a code from:\nCODE:FILESYSTEM'.format(fstype))
for k, v in fstypes.items():
print(k + ":" + v)
exit()
cmds.append(['sgdisk',
'-n', '{0}:{1}:{2}'.format(str(p),
self.disk[d]['parts'][str(p)]['start'],
self.disk[d]['parts'][str(p)]['stop']),
#'-c', '{0}:"{1}"'.format(str(p), self.disk[d]['parts'][str(p)]['label']), # TODO: add support for partition labels
'-t', '{0}:{1}'.format(str(p), fstype),
d])
mkformat = list(formatting[fstype]) # copy; several fstype codes share the same list, so don't mutate it in-place
for x, y in enumerate(mkformat):
if y == '%PART%':
mkformat[x] = d + str(p)
cmds.append(mkformat)
# TODO: add non-gpt stuff here?
with open(logfile, 'a') as log:
for p in cmds:
subprocess.call(p, stdout = log, stderr = subprocess.STDOUT)
usermntidx = list(self.mount.keys())
usermntidx.sort() # We want to make sure we do this in order.
for k in usermntidx:
if self.mount[k]['mountpt'] == 'swap':
subprocess.call(['swapon', self.mount[k]['device']], stdout = log, stderr = subprocess.STDOUT)
else:
os.makedirs(self.mount[k]['mountpt'], exist_ok = True)
os.chown(self.mount[k]['mountpt'], 0, 0)
cmd = ['mount']
if self.mount[k]['fstype']:
cmd.extend(['-t', self.mount[k]['fstype']])
if self.mount[k]['opts']:
cmd.extend(['-o', self.mount[k]['opts']])
cmd.extend([self.mount[k]['device'], self.mount[k]['mountpt']])
subprocess.call(cmd, stdout = log, stderr = subprocess.STDOUT)
return()

def mounts(self):
mntorder = list(self.mount.keys())
mntorder.sort()
for m in mntorder:
mnt = self.mount[m]
if mnt['mountpt'].lower() == 'swap':
cmd = ['swapon', mnt['device']]
else:
cmd = ['mount', mnt['device'], mnt['mountpt']]
if mnt['opts']:
cmd[1:1] = ['-o', mnt['opts']] # flag and value must be separate argv items
if mnt['fstype']:
cmd[1:1] = ['-t', mnt['fstype']]
# with open(os.devnull, 'w') as DEVNULL:
# for p in cmd:
# subprocess.call(p, stdout = DEVNULL, stderr = subprocess.STDOUT)
# And we need to add some extra mounts to support a chroot. We also need to know what was mounted before.
with open('/proc/mounts', 'r') as f:
procmounts = f.read()
mountlist = {}
for i in procmounts.splitlines():
mountlist[i.split()[1]] = i
cmounts = {}
for m in ('chroot', 'resolv', 'proc', 'sys', 'efi', 'dev', 'pts', 'shm', 'run', 'tmp'):
cmounts[m] = None
chrootdir = self.system['chrootpath']
# chroot (bind mount... onto itself. it's so stupid, i know. see https://bugs.archlinux.org/task/46169)
if chrootdir not in mountlist.keys():
cmounts['chroot'] = ['mount', '--bind', chrootdir, chrootdir]
# resolv.conf (for DNS resolution in the chroot)
if (chrootdir + '/etc/resolv.conf') not in mountlist.keys():
cmounts['resolv'] = ['/bin/mount', '--bind', '-o', 'ro', '/etc/resolv.conf', chrootdir + '/etc/resolv.conf']
# proc
if (chrootdir + '/proc') not in mountlist.keys():
cmounts['proc'] = ['/bin/mount', '-t', 'proc', '-o', 'nosuid,noexec,nodev', 'proc', chrootdir + '/proc']
# sys
if (chrootdir + '/sys') not in mountlist.keys():
cmounts['sys'] = ['/bin/mount', '-t', 'sysfs', '-o', 'nosuid,noexec,nodev,ro', 'sys', chrootdir + '/sys']
# efi (if it exists on the host)
if '/sys/firmware/efi/efivars' in mountlist.keys():
if (chrootdir + '/sys/firmware/efi/efivars') not in mountlist.keys():
cmounts['efi'] = ['/bin/mount', '-t', 'efivarfs', '-o', 'nosuid,noexec,nodev', 'efivarfs', chrootdir + '/sys/firmware/efi/efivars']
# dev
if (chrootdir + '/dev') not in mountlist.keys():
cmounts['dev'] = ['/bin/mount', '-t', 'devtmpfs', '-o', 'mode=0755,nosuid', 'udev', chrootdir + '/dev']
# pts
if (chrootdir + '/dev/pts') not in mountlist.keys():
cmounts['pts'] = ['/bin/mount', '-t', 'devpts', '-o', 'mode=0620,gid=5,nosuid,noexec', 'devpts', chrootdir + '/dev/pts']
# shm (if it exists on the host)
if '/dev/shm' in mountlist.keys():
if (chrootdir + '/dev/shm') not in mountlist.keys():
cmounts['shm'] = ['/bin/mount', '-t', 'tmpfs', '-o', 'mode=1777,nosuid,nodev', 'shm', chrootdir + '/dev/shm']
# run (if it exists on the host)
if '/run' in mountlist.keys():
if (chrootdir + '/run') not in mountlist.keys():
cmounts['run'] = ['/bin/mount', '-t', 'tmpfs', '-o', 'nosuid,nodev,mode=0755', 'run', chrootdir + '/run']
# tmp (if it exists on the host)
if '/tmp' in mountlist.keys():
if (chrootdir + '/tmp') not in mountlist.keys():
cmounts['tmp'] = ['/bin/mount', '-t', 'tmpfs', '-o', 'mode=1777,strictatime,nodev,nosuid', 'tmp', chrootdir + '/tmp']
# Because the order of these mountpoints is so ridiculously important, we hardcode it.
# Yeah, python 3.6 has ordered dicts, but do we really want to risk it?
# Okay. So we finally have all the mounts bound. Whew.
return(cmounts)
def setup(self, mounts = False):
# TODO: could we leverage https://github.com/hartwork/image-bootstrap somehow? I want to keep this close
# to standard Python libs, though, to reduce dependency requirements.
hostscript = []
chrootcmds = []
locales = []
locale = []
if not mounts:
mounts = self.mounts()
# Get the necessary fstab additions for the guest
chrootfstab = subprocess.check_output(['genfstab', '-U', self.system['chrootpath']])
# Set up the time, and then kickstart the guest install.
hostscript.append(['timedatectl', 'set-ntp', 'true'])
# Also start haveged if we have it.
try:
with open(os.devnull, 'w') as devnull:
subprocess.call(['haveged'], stderr = devnull)
except Exception:
pass
# Make sure we get the keys, in case we're running from a minimal live env.
hostscript.append(['pacman-key', '--init'])
hostscript.append(['pacman-key', '--populate'])
hostscript.append(['pacstrap', self.system['chrootpath'], 'base'])
# Run the basic host prep
#with open(os.devnull, 'w') as DEVNULL:
with open(logfile, 'a') as log:
for c in hostscript:
subprocess.call(c, stdout = log, stderr = subprocess.STDOUT)
with open('{0}/etc/fstab'.format(self.system['chrootpath']), 'a') as f:
f.write('# Generated by AIF-NG.\n')
f.write(chrootfstab.decode('utf-8'))
with open(logfile, 'a') as log:
for m in ('resolv', 'proc', 'sys', 'efi', 'dev', 'pts', 'shm', 'run', 'tmp'):
if mounts[m]:
subprocess.call(mounts[m], stdout = log, stderr = subprocess.STDOUT)

# Validating this would be better with pytz, but it's not stdlib. dateutil would also work, but same problem.
# https://stackoverflow.com/questions/15453917/get-all-available-timezones
tzlist = subprocess.check_output(['timedatectl', 'list-timezones']).decode('utf-8').splitlines()
if self.system['timezone'] not in tzlist:
print('WARNING (non-fatal): {0} does not seem to be a valid timezone, but we\'re continuing anyways.'.format(self.system['timezone']))
tzfile = '{0}/etc/localtime'.format(self.system['chrootpath'])
if os.path.lexists(tzfile):
os.remove(tzfile)
os.symlink('/usr/share/zoneinfo/{0}'.format(self.system['timezone']), tzfile)
# This is an ugly hack. TODO: find a better way of determining if the host is set to UTC in the RTC. maybe the datetime module can do it.
utccheck = subprocess.check_output(['timedatectl', 'status']).decode('utf-8').splitlines()
utccheck = [x.strip(' ') for x in utccheck]
utcstatus = False
for i, v in enumerate(utccheck):
if v.startswith('RTC in local'):
utcstatus = (v.split(': ')[1]).lower() in ('yes',)
break
if utcstatus:
chrootcmds.append(['hwclock', '--systohc'])
# We need to check the locale, and set up locale.gen.
with open('{0}/etc/locale.gen'.format(self.system['chrootpath']), 'r') as f:
localeraw = f.readlines()
for line in localeraw:
if not line.startswith('# '): # Comments, thankfully, have a space between the leading octothorpe and the comment. Locales have no space.
i = line.strip().strip('#')
if i != '': # We also don't want blank entries. Keep it clean, folks.
locales.append(i)
for i in locales:
localelst = i.split()
if localelst[0].lower().startswith(self.system['locale'].lower()):
locale.append(' '.join(localelst).strip())
for i, v in enumerate(localeraw):
for x in locale:
if v.startswith('#{0}'.format(x)):
localeraw[i] = x + '\n'
with open('{0}/etc/locale.gen'.format(self.system['chrootpath']), 'w') as f:
f.write('# Modified by AIF-NG.\n')
f.write(''.join(localeraw))
with open('{0}/etc/locale.conf'.format(self.system['chrootpath']), 'a') as f:
f.write('# Added by AIF-NG.\n')
f.write('LANG={0}\n'.format(locale[0].split()[0]))
chrootcmds.append(['locale-gen'])
# Set up the kbd layout.
# Currently there is NO validation on this. TODO.
if self.system['kbd']:
with open('{0}/etc/vconsole.conf'.format(self.system['chrootpath']), 'a') as f:
f.write('# Generated by AIF-NG.\nKEYMAP={0}\n'.format(self.system['kbd']))
# Set up the hostname.
with open('{0}/etc/hostname'.format(self.system['chrootpath']), 'w') as f:
f.write('# Generated by AIF-NG.\n')
f.write(self.network['hostname'] + '\n')
with open('{0}/etc/hosts'.format(self.system['chrootpath']), 'a') as f:
f.write('# Added by AIF-NG.\n127.0.0.1\t{0}\t{1}\n'.format(self.network['hostname'],
(self.network['hostname']).split('.')[0]))
# Set up networking.
ifaces = []
# Ideally we'd find a better way to do... all of this. Patches welcome. TODO.
if 'auto' in self.network['ifaces'].keys():
# Get the default route interface.
for line in subprocess.check_output(['ip', '-oneline', 'route', 'show']).decode('utf-8').splitlines():
line = line.split()
if line[0] == 'default':
autoiface = line[4]
break
ifaces = list(self.network['ifaces'].keys())
ifaces.sort()
if autoiface in ifaces:
ifaces.remove(autoiface)
for iface in ifaces:
resolvers = False
if 'resolvers' in self.network['ifaces'][iface].keys():
resolvers = self.network['ifaces'][iface]['resolvers']
if iface == 'auto':
ifacedev = autoiface
iftype = 'dhcp'
else:
ifacedev = iface
iftype = 'static'
netprofile = 'Description=\'A basic {0} ethernet connection ({1})\'\nInterface={1}\nConnection=ethernet\n'.format(iftype, ifacedev)
if 'ipv4' in self.network['ifaces'][iface].keys():
if self.network['ifaces'][iface]['ipv4']:
netprofile += 'IP={0}\n'.format(iftype)
if 'ipv6' in self.network['ifaces'][iface].keys():
if self.network['ifaces'][iface]['ipv6']:
netprofile += 'IP6={0}\n'.format(iftype) # TODO: change this to stateless if iftype='dhcp' instead?
for proto in ('ipv4', 'ipv6'):
addrs = []
if proto in self.network['ifaces'][iface].keys():
if proto == 'ipv4':
addr = 'Address'
gwstring = 'Gateway'
elif proto == 'ipv6':
addr = 'Address6'
gwstring = 'Gateway6'
gw = self.network['ifaces'][iface][proto]['gw']
for ip in self.network['ifaces'][iface][proto]['addresses']:
if ip == 'auto':
continue
else:
try:
ipver = ipaddress.ip_network(ip, strict = False)
addrs.append(ip)
except ValueError:
exit('{0} was specified but is NOT a valid IPv4/IPv6 address!'.format(ip))
if iftype == 'static':
# Static addresses
netprofile += '{0}=(\'{1}\')\n'.format(addr, ('\' \'').join(addrs))
# Gateway
if gw:
netprofile += '{0}={1}\n'.format(gwstring, gw)
# DNS resolvers
if resolvers:
netprofile += 'DNS=(\'{0}\')\n'.format('\' \''.join(resolvers))
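# Illustrative (hypothetical interface and addresses): a finished static profile
# built by the lines above might read:
#   Description='A basic static ethernet connection (eth0)'
#   Interface=eth0
#   Connection=ethernet
#   IP=static
#   Address=('192.0.2.10/24')
#   Gateway=192.0.2.1
#   DNS=('192.0.2.53')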
filename = '{0}/etc/netctl/{1}'.format(self.system['chrootpath'], ifacedev)
sysdfile = '{0}/etc/systemd/system/netctl@{1}.service'.format(self.system['chrootpath'], ifacedev)
# The good news is since it's a clean install, we only have to account for our own data, not pre-existing.
with open(filename, 'w') as f:
f.write('# Generated by AIF-NG.\n')
f.write(netprofile)
with open(sysdfile, 'w') as f:
f.write('# Generated by AIF-NG.\n')
f.write(('.include /usr/lib/systemd/system/netctl@.service\n\n[Unit]\n' +
'Description=A basic {0} ethernet connection\n' +
'BindsTo=sys-subsystem-net-devices-{1}.device\n' +
'After=sys-subsystem-net-devices-{1}.device\n').format(iftype, ifacedev))
os.symlink('/etc/systemd/system/netctl@{0}.service'.format(ifacedev),
'{0}/etc/systemd/system/multi-user.target.wants/netctl@{1}.service'.format(self.system['chrootpath'], ifacedev))
os.symlink('/usr/lib/systemd/system/netctl.service',
'{0}/etc/systemd/system/multi-user.target.wants/netctl.service'.format(self.system['chrootpath']))
# Root password
if self.users['root']['password']:
roothash = self.users['root']['password']
else:
roothash = '!'
with fileinput.input('{0}/etc/shadow'.format(self.system['chrootpath']), inplace = True) as f:
for line in f:
linelst = line.split(':')
if linelst[0] == 'root':
linelst[1] = roothash
print(':'.join(linelst), end = '')
# Add users
for user in self.users.keys():
# We already handled root user
if user != 'root':
cmd = ['useradd']
if self.users[user]['home']['create']:
cmd.append('-m')
if self.users[user]['home']['path']:
cmd.extend(['-d', self.users[user]['home']['path']]) # flag and value as separate argv items
if self.users[user]['comment']:
cmd.extend(['-c', self.users[user]['comment']])
if self.users[user]['gid']:
cmd.extend(['-g', self.users[user]['gid']])
if self.users[user]['uid']:
cmd.extend(['-u', self.users[user]['uid']])
if self.users[user]['password']:
cmd.extend(['-p', self.users[user]['password']])
cmd.append(user)
chrootcmds.append(cmd)
# Add groups
if self.users[user]['xgroup']:
for group in self.users[user]['xgroup'].keys():
gcmd = False
if self.users[user]['xgroup'][group]['create']:
gcmd = ['groupadd']
if self.users[user]['xgroup'][group]['gid']:
gcmd.extend(['-g', self.users[user]['xgroup'][group]['gid']])
gcmd.append(group)
chrootcmds.append(gcmd)
chrootcmds.append(['usermod', '-aG', '{0}'.format(','.join(self.users[user]['xgroup'].keys())), user])
# Handle sudo
if self.users[user]['sudo']:
os.makedirs('{0}/etc/sudoers.d'.format(self.system['chrootpath']), exist_ok = True)
os.chmod('{0}/etc/sudoers.d'.format(self.system['chrootpath']), 0o750)
with open('{0}/etc/sudoers.d/{1}'.format(self.system['chrootpath'], user), 'w') as f:
f.write('# Generated by AIF-NG.\nDefaults:{0} !lecture\n{0} ALL=(ALL) ALL\n'.format(user))
# Base configuration- initcpio, etc.
chrootcmds.append(['mkinitcpio', '-p', 'linux'])
return(chrootcmds)
def bootloader(self):
# Bootloader configuration
btldr = self.system['bootloader']['type']
bootcmds = []
chrootpath = self.system['chrootpath']
bttarget = self.system['bootloader']['target']
if btldr == 'grub':
bootcmds.append(['pacman', '--needed', '--noconfirm', '-S', 'grub', 'efibootmgr'])
bootcmds.append(['grub-install'])
if self.system['bootloader']['efi']:
bootcmds[1].extend(['--target=x86_64-efi', '--efi-directory={0}'.format(bttarget), '--bootloader-id=Arch'])
else:
bootcmds[1].extend(['--target=i386-pc', bttarget])
bootcmds.append(['grub-mkconfig', '-o', '{0}/grub/grub.cfg'.format(bttarget)])
elif btldr == 'systemd':
if self.system['bootloader']['target'] != '/boot':
shutil.copy2('{0}/boot/vmlinuz-linux'.format(chrootpath),
'{0}/{1}/vmlinuz-linux'.format(chrootpath, bttarget))
shutil.copy2('{0}/boot/initramfs-linux.img'.format(chrootpath),
'{0}/{1}/initramfs-linux.img'.format(chrootpath, bttarget))
with open('{0}/{1}/loader/loader.conf'.format(chrootpath, bttarget), 'w') as f:
f.write('# Generated by AIF-NG.\ndefault arch\ntimeout 4\neditor 0\n')
# Gorram, I wish there was a better way to get the partition UUID in stdlib.
majmindev = os.lstat('{0}/{1}'.format(chrootpath, bttarget)).st_dev
majdev = os.major(majmindev)
mindev = os.minor(majmindev)
btdev = os.path.basename(os.readlink('/sys/dev/block/{0}:{1}'.format(majdev, mindev)))
partuuid = False
for d in os.listdir('/dev/disk/by-partuuid'):
linktarget = os.path.basename(os.readlink('/dev/disk/by-partuuid/{0}'.format(d))) # readlink needs the full path
if linktarget == btdev:
partuuid = d # the symlink's name is the PARTUUID itself
break
if not partuuid:
exit('ERROR: Cannot determine PARTUUID for /dev/{0}.'.format(btdev))
with open('{0}/{1}/loader/entries/arch.conf'.format(chrootpath, bttarget), 'w') as f:
f.write(('# Generated by AIF-NG.\ntitle\t\tArch Linux\nlinux /vmlinuz-linux\n') +
('initrd /initramfs-linux.img\noptions root=PARTUUID={0} rw\n').format(partuuid))
bootcmds.append(['bootctl', '--path={0}'.format(bttarget), 'install'])
# TODO: Add a bit here to alter EFI boot order so we boot right to the newly-installed env.
# should probably be optional.
return(bootcmds)

def scriptcmds(self, scripttype):
t = scripttype
if t in self.scripts.keys():
for i, s in enumerate(self.scripts[t]):
dirpath = '/root/scripts/{0}'.format(t)
os.makedirs(dirpath, exist_ok = True)
filepath = '{0}/{1}'.format(dirpath, i)
with open(filepath, 'w') as f:
f.write(s)
os.chmod(filepath, 0o700)
os.chown(filepath, 0, 0) # shouldn't be necessary, but just in case the umask's messed up or something.
if t in ('pre', 'pkg'):
# We want to run these right away.
with open(logfile, 'a') as log:
for i, s in enumerate(self.scripts[t]):
subprocess.call('/root/scripts/{0}/{1}'.format(t, i),
stdout = log,
stderr = subprocess.STDOUT)
return()

def pacmanSetup(self):
# This should be run outside the chroot.
conf = '{0}/etc/pacman.conf'.format(self.system['chrootpath'])
with open(conf, 'r') as f:
confdata = f.readlines()
# This... is not 100% sane, and we need to change it if the pacman.conf upstream changes order of the default repos.
# Here be dragons; you have been warned. TODO.
idx = confdata.index('#[testing]\n')
shutil.copy2(conf, '{0}.arch'.format(conf))
newconf = confdata[:idx]
newconf.append('# Modified by AIF-NG.\n')
for r in self.software['repos']:
if self.software['repos'][r]['mirror'].startswith('file://'):
mirror = 'Include = {0}'.format(re.sub('^file://', '', self.software['repos'][r]['mirror']))
else:
mirror = 'Server = {0}'.format(self.software['repos'][r]['mirror'])
newentry = ['[{0}]\n'.format(r), '{0}\n'.format(mirror)]
if self.software['repos'][r]['siglevel'] != 'default':
newentry.append('SigLevel = {0}\n'.format(self.software['repos'][r]['siglevel']))
if self.software['repos'][r]['enabled']:
pass # I know, shame on me. We want this because we explicitly want it to be set as True
else:
newentry = ["#" + i for i in newentry]
newentry.append('\n')
newconf.extend(newentry)
with open(conf, 'w') as f:
f.write(''.join(newconf))
if self.software['mirrors']:
mirrorlst = '{0}/etc/pacman.d/mirrorlist'.format(self.system['chrootpath'])
shutil.copy2(mirrorlst, '{0}.arch'.format(mirrorlst))
# TODO: file vs. server?
with open(mirrorlst, 'w') as f:
for m in self.software['mirrors']:
if m.startswith('file://'):
mirror = 'Include = {0}'.format(re.sub('^file://', '', m))
else:
mirror = 'Server = {0}'.format(m)
f.write('{0}\n'.format(mirror))
return()

def packagecmds(self):
pkgcmds = []
# This should be run in the chroot, unless we find a way to pacstrap
# packages separate from chrooting
if self.software['command']:
pkgr = shlex.split(self.software['command'])
else:
pkgr = ['pacman', '--needed', '--noconfirm', '-S']
if self.software['packages']:
for p in self.software['packages'].keys():
if self.software['packages'][p]['repo']:
pkgname = '{0}/{1}'.format(self.software['packages'][p]['repo'], p)
else:
pkgname = p
pkgr.append(pkgname)
pkgcmds.append(pkgr)
return(pkgcmds)

def serviceSetup(self):
# this runs inside the chroot
for s in self.system['services'].keys():
if not re.search(r'\.(service|socket|target|timer)$', s): # i don't bother with .path, .busname, etc.- i might in the future? TODO.
svcname = '{0}.service'.format(s)
else:
svcname = s
service = '/usr/lib/systemd/system/{0}'.format(svcname)
sysdunit = '/etc/systemd/system/multi-user.target.wants/{0}'.format(svcname)
if self.system['services'][s]:
if not os.path.lexists(sysdunit):
os.symlink(service, sysdunit)
else:
if os.path.lexists(sysdunit):
os.remove(sysdunit)
return()

def chroot(self, chrootcmds = False, bootcmds = False, scriptcmds = False, pkgcmds = False):
if not chrootcmds:
chrootcmds = self.setup()
if not bootcmds:
bootcmds = self.bootloader()
if not scriptcmds:
scripts = self.scripts
else:
scripts = scriptcmds # otherwise 'scripts' would be undefined below
if not pkgcmds:
pkgcmds = self.packagecmds()
# Switch in the log, and link.
os.rename(logfile, '{0}/{1}'.format(self.system['chrootpath'], logfile))
os.symlink('{0}/{1}'.format(self.system['chrootpath'], logfile), logfile)
self.pacmanSetup() # This needs to be done before the chroot
# We don't need this currently, but we might down the road.
#chrootscript = '#!/bin/bash\n# https://aif.square-r00t.net/\n\n'
#with open('{0}/root/aif.sh'.format(self.system['chrootpath']), 'w') as f:
# f.write(chrootscript)
#os.chmod('{0}/root/aif.sh'.format(self.system['chrootpath']), 0o700)
real_root = os.open("/", os.O_RDONLY)
os.chroot(self.system['chrootpath'])
# Does this even work with an os.chroot()? Let's hope so!
with open(logfile, 'a') as log:
for c in chrootcmds:
subprocess.call(c, stdout = log, stderr = subprocess.STDOUT)
if scripts['pkg']:
self.scriptcmds('pkg')
for i, s in enumerate(scripts['pkg']):
subprocess.call('/root/scripts/pkg/{0}'.format(i),
stdout = log,
stderr = subprocess.STDOUT)
for p in pkgcmds:
subprocess.call(p, stdout = log, stderr = subprocess.STDOUT)
for b in bootcmds:
subprocess.call(b, stdout = log, stderr = subprocess.STDOUT)
if scripts['post']:
self.scriptcmds('post')
for i, s in enumerate(scripts['post']):
subprocess.call('/root/scripts/post/{0}'.format(i),
stdout = log,
stderr = subprocess.STDOUT)
self.serviceSetup()
#os.system('{0}/root/aif-pre.sh'.format(self.system['chrootpath']))
#os.system('{0}/root/aif-post.sh'.format(self.system['chrootpath']))
os.fchdir(real_root)
os.chroot('.')
os.close(real_root)
if not os.path.isfile('{0}/sbin/init'.format(self.system['chrootpath'])):
os.symlink('../lib/systemd/systemd', '{0}/sbin/init'.format(self.system['chrootpath']))
return()
def unmount(self):
with open(logfile, 'a') as log:
subprocess.call(['umount', '-lR', self.system['chrootpath']], stdout = log, stderr = subprocess.STDOUT)
# We should also remove the (now dead) log symlink.
#Note that this does NOT delete the logfile on the installed system.
os.remove(logfile)
return()

def runInstall(confdict):
install = archInstall(confdict)
install.scriptcmds('pre')
install.format()
install.chroot()
install.unmount()
return()

def main():
if os.getuid() != 0:
exit('This must be run as root.')
conf = aif()
instconf = conf.buildDict()
if 'DEBUG' in os.environ.keys():
import pprint
with open(logfile, 'a') as log:
pprint.pprint(instconf, stream = log)
runInstall(instconf)
if instconf['system']['reboot']:
subprocess.run(['reboot'])

if __name__ == "__main__":
main()

View File

@ -85,6 +85,14 @@ TIP: Your distro's package manager should have most if not all of these availabl


NOTE: Some versions may be higher than actually needed.


////
Need to revamp. Recommended vs. fallback plus required for both

Recommended:
pygobject-introspection
libblockdev
libnm
////


=== Necessary
These are needed for using AIF-NG.
@ -121,7 +129,6 @@ Configure your bootloader to add the following options as necessary:
^m|aif_auth |(see <<aif_url, below>>)
^m|aif_username |(see <<aif_url, below>>)
^m|aif_password |(see <<aif_url, below>>)
|======================


[[aif_url]]
@ -135,7 +142,6 @@ Configure your bootloader to add the following options as necessary:
* If `aif_url` is an HTTP/HTTPS URL, then `aif_user` is the username to use with the https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors[401^] (https://tools.ietf.org/html/rfc7235[RFC 7235^]) auth (via `aif_auth`).
** If `aif_url` is an FTP/FTPS URI, then `aif_user` will be the FTP user.
** The same behavior applies for `aif_password`.


== Building a compatible LiveCD
The default Arch install CD does not have AIF installed (hopefully, this will change someday). You have two options for using AIF-NG.
@ -199,13 +205,13 @@ The `/aif` element is the https://en.wikipedia.org/wiki/Root_element[root elemen
The `/aif/storage` element contains <<code_disk_code, disk>>, <<code_part_code, disk/part>>, and <<code_mount_code, mount>> elements.


==== `<disk>`
The `/aif/storage/disk` element holds information about disks on the system, and within this element are one (or more) <<code_part_code, part>> elements. Note that any `disk` elements specified here will be *entirely reformatted*; operate under the assumption that ANY and ALL pre-existing data on the specified device will be IRREVOCABLY LOST.


[options="header"] [options="header"]
|====================== |======================
^|Attribute ^|Value ^|Attribute ^|Value
^m|device |The disk to format (e.g. `/dev/sda`) ^m|device |The disk to format (e.g. `/dev/sda`)
^m|diskfmt |https://en.wikipedia.org/wiki/GUID_Partition_Table[`gpt`^] or https://en.wikipedia.org/wiki/Master_boot_record[`bios`^] ^m|diskfmt |https://en.wikipedia.org/wiki/GUID_Partition_Table[`gpt`^] or https://en.wikipedia.org/wiki/Master_boot_record[`msdos`^]
|====================== |======================


===== `<part>`
@ -223,10 +229,11 @@ The `/aif/storage/disk/part` element holds information on partitioning that it's
[[specialsize]]
The `start` and `stop` attributes can be in the form of:


* A percentage of the total disk size, indicated by a percentage sign (`"10%"`)
* A size, indicated by the abbreviation (`"300KiB"`, `"10GB"`, etc.)
** Accepts notation in https://en.wikipedia.org/wiki/Binary_prefix[SI or IEC formats^]
* A raw sector size, if no suffix is provided (sector sizes are *typically* 512 bytes but this can vary depending on disk) (`1024`)
* One can also specify modifiers (`"+10%"`, `"-400MB"`, etc.). A positive modifier indicates from the *start of the disk* and a negative modifier indicates from the *end of the disk* (the default, if none is specified, is to use the _previously defined partition's end_ as the *start* for the new partition, or the _beginning of the usable disk space_ as the *start* if no previous partition is specified, and to *add* the size to the *start* until the *stop* is reached). See the example after this list.
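
A hedged sketch (partition numbers, sizes, and fstype codes below are hypothetical): the second partition stops 400MB short of the end of the disk, and the third fills the remainder:

[source,xml]
----
<part num="1" start="0%" stop="10%" fstype="ef00" />
<part num="2" start="10%" stop="-400MB" fstype="8300" />
<part num="3" start="-400MB" stop="100%" fstype="8200" />
----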


[[fstypes]]
NOTE: The following is a table for your reference of partition types. Note that it may be out of date, so reference the link above for the most up-to-date table.
@ -528,7 +535,6 @@ The `/aif/scripts/script` elements specify scripts to be run at different stages
^m|authtype |Same behavior as <<starting_an_install, `aif_auth`>> but for fetching this script (see also <<aif_url, further notes>> on this)
^m|user |Same behavior as <<starting_an_install, `aif_user`>> but for fetching this script (see also <<aif_url, further notes>> on this)
^m|password |Same behavior as <<starting_an_install, `aif_password`>> but for fetching this script (see also <<aif_url, further notes>> on this)
^m|execution |(see <<script_types, below>>)
|======================


@ -540,10 +546,163 @@ There are several script types available for `execution`. Currently, these are:
* pkg
* post


*pre* scripts are run (in specified order) before the disks are even formatted. *pkg* scripts are run (in specified order) right before the <<code_package_code, packages>> are installed (this allows you to configure an <<command, alternate packager>> such as https://aur.archlinux.org/packages/apacman/[apacman^]) -- these are run *inside* the chroot of the new install. *post* scripts are run inside the chroot like *pkg*, but are executed very last thing, just before the reboot.
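
For example, a minimal sketch following the layout used in the bundled example config (script URIs are hypothetical; `<pkg>` is assumed by analogy with `<pre>`/`<post>` there):

[source,xml]
----
<scripts>
<pre>
<script>https://host.example.tld/scripts/partition-prep.sh</script>
</pre>
<pkg>
<script>https://host.example.tld/scripts/install-apacman.py</script>
</pkg>
<post>
<script>https://host.example.tld/scripts/cleanup.sh</script>
</post>
</scripts>
----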


= Further Information
Here you will find further info and other resources relating to AIF-NG.

== FAQ

=== "Eww, why XML?"
Because it's the superior format for this:

* It supports in-spec validation of data values and data types, structural nesting rules, and required data objects at specified occurrence levels, etc. (unlike JSON, YAML, INI, etc.), both in and out of channel.
** This means it's MUCH easier for code/language/project/etc.-agnostic software to create, generate, and validate a configuration profile.
* It supports inclusion via XInclude, letting you standardize your configuration snippets across multiple configuration profiles (unlike JSON, YAML, INI, etc.).
* It supports sane nesting (unlike INI).
* It supports attributes to data objects (unlike JSON, YAML, INI, etc.).
* While certainly not used as extensively as it could be in this particular project, it supports namespacing -- and referential namespacing at that, providing a URI to get more info about a certain namespace. JSON, YAML, INI, etc. all do not.
* It is not whitespace-sensitive to denote significance/levels of objects (unlike YAML and, in some cases, INI). This allows for whitespace compression (commonly referred to as "minifying") while still being able to completely retain whitespace inside data's content.
** And as a result, it requires MUCH less escaping and post-parsing cleanup like e.g. JSON and YAML do.
* and so on.

Trust me. XML is superior, especially when needing to represent something as complex as *an entire OS install*. Sorry not sorry to all the bigmad webdevs and DevOps-y people out there. JSON and YAML actually do suck.

=== "How do I make AIF-NG operate entirely offline?"
This is cooked right in, but takes a little extra work.

1.) First you'll need to locally clone the supporting XSD (XML schemas) that AIF-NG uses to verify the configuration file:

`/var/tmp/aif/xml`
[source,bash]
----
mkdir -p /var/tmp/aif
cd /var/tmp/aif
git clone https://git.square-r00t.net/XML xml
----

The path you clone it to isn't important as long as you're consistent below.

2.) Then edit your AIF-NG configuration file to source this directory for XML verification:

`aif.xml` (before)
[source,xml]
----
xsi:schemaLocation="https://aif-ng.io/ http://schema.xml.r00t2.io/projects/aif.xsd"
----

`aif.xml` (after)
[source,xml]
----
xsi:schemaLocation="https://aif-ng.io/ file:///var/tmp/aif/xml/schema/projects/aif.xsd"
----

The XSD files use includes with relative paths, so the rest of that is automagic.

3.) Use local file:// URIs in the rest of your AIF-NG configuration file.
e.g.:

[source,xml]
----
<tarball>file:///var/tmp/aif/bootstrap.tar.gz</tarball>
----

and

[source,xml]
----
<signatureFile>file:///var/tmp/aif/bootstrap.tar.gz.sig</signatureFile>
----

etc.

Obviously you need to *download* those files to their respective destinations first, however.

4.) Lastly, ensure you only use local pacman mirrors in your config. This gets tricky because the chroot will not have a way to access the host's filesystem without e.g. a bind mount being created beforehand.
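
A hedged sketch (the host path and chroot mountpoint here are hypothetical):

[source,bash]
----
# Expose a host-local package mirror inside the chroot.
mkdir -p /mnt/aif/var/local/repo
mount --bind /srv/repo /mnt/aif/var/local/repo
----

The corresponding mirror entry would then be something like `file:///var/local/repo/$repo/os/$arch` -- a path as seen from *inside* the chroot.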

As long as:

* No remote locations are specified in your AIF-NG configuration file...
* *and it is complete and well-defined*...
* and your scripts don't make remote calls,

then it shouldn't try to perform any remote operations.

Note that if you specified a GPG verification, you'll need to use a local exported key file for the public key (`keyFile`); if you use a `keyID`, then AIF-NG will try to fetch the key from keyservers.
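
A hedged sketch (the key ID and paths are hypothetical; the `keyFile` layout is assumed by analogy with `keyID` in the example config):

[source,bash]
----
# Export the public key to a local file on any online machine first.
gpg --export --armor 0x4AA4767BBC9C4B1D18AE28B77F2D434B9741E8AC > /var/tmp/aif/release.asc
----

[source,xml]
----
<keys detect="false">
<keyFile>file:///var/tmp/aif/release.asc</keyFile>
</keys>
----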

=== "I specified start sector as 0 for a GPT-labeled disk but it starts at sector 2048 instead. What gives?"
GPT requires 33 sectors at the beginning (and 32 sectors at the end) for the actual table, plus an extra (usually) 512 bytes at the beginning for something called a https://en.wikipedia.org/wiki/GUID_Partition_Table#Protective_MBR_(LBA_0)[Protective MBR^] (this prevents disk utilities from automatically overwriting the GPT label in case they only recognize "msdos" labels and assume the disk is not formatted yet).

Most disks these days use something called https://en.wikipedia.org/wiki/Advanced_Format[Advanced Format^]. These align their sectors to multiples of 8, so sector 34 can't be used - it'd have to be sector 40. Additionally, various other low-level disk interactions (e.g. RAID stripe sizes) require a much larger boundary between partitions. If you're interested in a little more detail, you may find https://metebalci.com/blog/a-quick-tour-of-guid-partition-table-gpt/[this^] interesting (specifically https://metebalci.com/blog/a-quick-tour-of-guid-partition-table-gpt/#gpt-partition-entry-array[this section^], the paragraph starting with `You may also ask why the first partition starts from LBA 2048...`).

TL;DR: "It's the safest way to make sure your disk doesn't suffer massive degradation in performance, your RAID doesn't eat partitions, etc." Don't worry, it typically only ends up being about 1MB of "wasted" space surrounding partitions. I've written plaintext documentation larger than 1MB.

=== "Why isn't my last GPT partition extending to the last sector?"
See above.

=== "Why do partitions take `start`/`stop` attributes but LVs take `size`?"
Using `start`/`stop` attributes makes sense for disk partitions because they operate on actual geometry (positions on-disk); that is, this lets you create a "gap" between partitions on the disk which can be helpful if you want to do any modifications to the partition table afterwards (this is also why partitions are processed in the order they're specified).

LVs, however, aren't consecutive. There *is* no concept of a "start" and "stop" for an LV; LVM uses chunks called "(physical) extents" rather than sectors, and VGs don't have geometry since they're essentially a pool of blocks. This is also why modifiers like `-` and `+` aren't allowed for LV sizes - they're position-based, and LVs have no position.
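
To illustrate the contrast, a sketch using the attribute layout from the bundled example config (values hypothetical):

[source,xml]
----
<!-- A partition is positional: it spans from start to stop on the disk. -->
<part id="root" start="10%" stop="-400MiB" fsType="ext4"/>
<!-- An LV only has a size; LVM decides where its extents live. -->
<lv id="lv1" name="logical1" size="80%"/>
----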

=== "How can I use a whole disk as an MDADM member?"
TL;DR: https://unix.stackexchange.com/questions/320103/whats-the-difference-between-creating-mdadm-array-using-partitions-or-the-whole[You don't^]. You just don't.

The long-winded answer: it's a terrible idea. I'm not here to criticize how you want to structure your install, but I'm definitely going to try to prevent some dumb mistakes from being made. This is one of them.

It can cause a whole slew of issues, including but not limited to:

* Inflexible disk replacement. Disk geometry (low-level formatting, etc.) can https://queue.acm.org/detail.cfm?id=864058[vary wildly across vendors and models^]. When you have to replace a disk in your degraded RAID array, you're going to be in for a nasty surprise (loss of performance, incompatible size, etc.) when one vendor aligned their e.g. 1TB disk to 512 blocks and the other to 128 blocks (because there are some dumb vendors out there). If you try to replace a disk in a RAID-1 with mismatched size, even by a couple blocks, you're gonna have a bad time.
* Your motherboard may arbitrarily wipe out the RAID superblocks. http://forum.asrock.com/forum_posts.asp?TID=10174[(source)^] https://news.ycombinator.com/item?id=18541493[source^] https://www.phoronix.com/scan.php?page=news_item&px=Linux-Software-RAID-ASRock[source^]
* It can cause some weird issues with e.g. LVM on top of the array. https://askubuntu.com/questions/860643/raid-array-doesnt-reassemble-after-reboot[source^] https://superuser.com/questions/1492938/mdadm-raid-underlaying-an-lvm-gone-after-reboot[source^]
* You can't put a bootloader or EFI System Partition on the disk.

=== "How do I specify packages from the AUR?"
You'd have to https://wiki.archlinux.org/index.php/Makepkg[build the package(s)^], https://wiki.archlinux.org/index.php/Pacman/Tips_and_tricks#Custom_local_repository[set up a repository^], serve it via e.g. https://www.nginx.com/[nginx^], and add it as a repo (`/aif/pacman/repos/repo`) first. Then you can specify the package as normal as a `/aif/pacman/software/package` item.
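
In rough strokes, a hedged sketch (the package name, paths, and repo URL are hypothetical):

[source,bash]
----
# Build the AUR package and add it to a local repository database.
cd somepackage
makepkg --syncdeps --clean
repo-add /srv/http/custom/custom.db.tar.gz somepackage-1.0-1-x86_64.pkg.tar.zst
# Then serve /srv/http/custom via e.g. nginx.
----

[source,xml]
----
<repo name="custom" enabled="true" sigLevel="Optional">
<mirror>https://repo.domain.tld/custom</mirror>
</repo>
----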

=== "Why can't the network settings in <network> be applied during install?"
Simply put, it's a logical race condition. Probably 90+% of AIF-NG deploys bootstrap by fetching their XML configuration via a network URI (rather than a file URI). This means a network connection must already exist in the *install environment* (LiveCD, LiveUSB, PXE/iPXE, etc.) before AIF-NG even knows what network configuration you want the *persistent environment* to have.

Granted, this is a moot point if you're using a *`file://`* URI for the XML configuration, but that is not a very flexible approach regardless. The installation host itself is outside the scope of AIF-NG.

If you desire the configuration to be applied *during* the install, you can do it yourself in an `/aif/scripts/pre/script` or `/aif/scripts/pkg/script` script. The fetched XML file can be found at `/var/tmp/AIF.xml` in the install environment.
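
For instance, a hedged sketch of such a pre script (the device name and addresses are hypothetical):

[source,bash]
----
#!/bin/bash
# Bring up a static address in the install environment before the installer proceeds.
ip link set ens3 up
ip addr add 192.168.1.5/24 dev ens3
ip route add default via 192.168.1.1
----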

If you wish to SSH into the install environment to check the status/progress of the install, it is recommended that you set up a static lease (if using DHCP) or use SLAAC (if using IPv6), and configure your install environment accordingly beforehand. Remember, AIF-NG only *installs* Arch Linux; it tries very hard to *not* interact with the install environment.

=== "Why isn't enabling/disabling automatic DNS resolvers/routes/addresses working?"
This is going to be highly unpredictable based on the networking provider you choose. It's a limitation of underlying network-provider intercompatibility, resolver libraries, technology architecture, and the fact that there's no way to tell DHCP/DHCP6/SLAAC clients to *only* fetch information about a network and *not* assign a lease. This may be changed in the future, but because of how DNS servers are handled via DHCP/RDNSS and glibc (and the fact that IPv4 resolver addresses can serve IPv6 -- e.g. AAAA -- records and vice versa), and because of inherent limitations in some network providers like netctl, I wouldn't hold your breath.

=== "I'm using netctl as my network provider, and-"
I'ma let you finish, but netctl is a *really* simple network provider. I mean REALLY simple. As such, a lot of things just aren't feasible, and probably never will be. It's great for simple and flat configurations (i.e. all static everything, all automatic everything, etc.) and I even use it on my own machines where I can, but it simply doesn't make allowances for more complex setups. (This is why init scripts were replaced by systemd for init, remember? Script-and-shell-based utilities, such as netctl -- seriously, the entire thing's written in Bash -- just can't handle more complex jobs reliably.)

If you need more advanced functionality but don't want a lot of cruft or bloat, I recommend `networkd` as your network provider. It requires no extra packages (other than wpa_supplicant, if you're using wireless) because it's part of the systemd package (which is part of the most basic install of Arch) and handles more advanced configurations a lot more reliably.
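
Switching would then be just the `provider` attribute in your config (hostname hypothetical, and assuming `networkd` is the accepted attribute value, by analogy with `netctl` in the example config):

[source,xml]
----
<network hostname="host.domain.tld" provider="networkd">
<!-- ... -->
</network>
----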

=== "How do I specify WEP for a wireless network?"
You can't. WEP's pretty broken. I understand some legacy networks may still use it, but I'm incredibly uncomfortable supporting it.

If absolutely necessary, you can manually configure it yourself via a `/aif/scripts/post/script` script (or just configure it once you boot the newly-installed system).

==== "Then why do you allow connecting to open wireless networks in the config?"
Because captive portals are a thing. *Authing* to them, however -- that's out of my scope.

=== "How do I configure connecting to a WPA2 Enterprise network?"
You can't, currently; support is only stubbed out for now. If absolutely necessary, you can manually configure it yourself via a `/aif/scripts/post/script` script.

This hopefully will be changed in the future, however, as I'm interested in adding support. For now, only open and WPA/WPA2 PSK networks are considered supported.

=== "How do I use my own GnuPG homedir instead of letting AIF-NG create one automatically?"
I can pretty easily add support for this -- it's stubbed in already. But there are a couple reasons it doesn't really make sense to do so:

* Being that most people are probably using this from a LiveCD/LiveUSB/PXE/whatever, it's *highly* unlikely they'll even have a static GnuPG homedir available.
* Even if they did, AIF-NG has no real way of running a passphrase prompt. It's intended to be run automatically, non-interactively, and daemonized. You'd have to have a passphrase-less private key for it to work.
** Why? Because it needs to be able to sign and trust the key ID you specified to get an accurate validity reading of the signature. If the private key has a passphrase, this is required for the operation to complete. If a custom homedir with a passphrased private key was specified, the signature's signer's public key would already need to be imported into the keyring, signed, AND trusted (with a sufficiently high enough level).

=== "Why do I have to specify a URI or key ID for a GPG key but can include a raw text block for a GPG `signature`?"
Because keys are (generally speaking) intended to be publicly fetchable in some form or another. `signatures` are not (necessarily); they're more geared towards being file objects. I definitely recommend using `signatureFile` instead, though, even if it's just to a local .sig/.asc file.

=== "Why don't you support WKD for GPG key fetching?"
Because I didn't. If there is interest, I can add support for it but please don't request it unless you plan on actually using it.


== Bug Reports/Feature Requests
NOTE: It is possible to submit a bug or feature request without registering in my bugtracker. One of my pet peeves is needing to create an account/register on a bugtracker simply to report a bug! The following links only require an email address to file a bug (which is necessary in case I need any further clarification from you or to keep you updated on the status of the bug/feature request -- so please be sure to use a valid email address).

3
docs/THANKS Normal file
View File

@ -0,0 +1,3 @@
AIF-NG owes thanks to:

* jthan for being a good rubber ducky

View File

@ -1,9 +1,11 @@
- make disk partitioning/table formatting OPTIONAL (so it can be installed on an already formatted disk)
- support Arch Linux ARM?
- support multiple explicit locales via comma-separated list (see how i handle resolvers)
- config layout
-- need to apply defaults and annotate/document
--- is this necessary since i doc with asciidoctor now?
- how to support mdadm, lvm, LUKS FDE?
-- cryptsetup support- use new child type, "cryptPart", under storage/disk/ and new mount attrib, "isLUKS"?
- support serverside "autoconfig"- a mechanism to let servers automatically generate xml build configs. e.g.:
kernel ... aif_url="https://build.domain.tld/aif-ng.php" auto=yes
would yield the *client* sending info via URL params (actually, this might be better as a JSON POST, since we already have a way to generate JSON. sort of.),
@ -11,6 +13,7 @@
or something like that.
- parser: make sure to use https://mikeknoop.com/lxml-xxe-exploit/ fix
- convert use of confobj or whatever to maybe be suitable to use webFetch instead. LOTS of duplicated code there.
- support XInclude
- can i install packages the way pacstrap does, without a chroot? i still need to do it, unfortunately, for setting up efibootmgr etc. but..: - can i install packages the way pacstrap does, without a chroot? i still need to do it, unfortunately, for setting up efibootmgr etc. but..:
pacman -r /mnt/aif -Sy base --cachedir=/mnt/aif/var/cache/pacman/pkg --noconfirm pacman -r /mnt/aif -Sy base --cachedir=/mnt/aif/var/cache/pacman/pkg --noconfirm
/dev/sda2 on /mnt/aif type ext4 (rw,relatime,data=ordered)
@ -23,15 +26,19 @@
shm on /mnt/aif/dev/shm type tmpfs (rw,nosuid,nodev,relatime)
run on /mnt/aif/run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
tmp on /mnt/aif/tmp type tmpfs (rw,nosuid,nodev)
OR just use pyalpm


DOCUMENTATION:
- https://stackoverflow.com/questions/237938/how-to-convert-xsd-to-human-readable-documentation ?
- (*) https://stackoverflow.com/a/6686367


for network configuration, add in support for using a device's MAC address instead of interface name?


also:
-create boot media with bdisk since default arch doesn't even have python 3 -create boot media with bdisk since default arch doesn't even have python 3
-- this is.. sort of? done. but iPXE/mini build is failing, need to investigate why
-- i think i fixed iPXE but i need to generate another one once 1.5 is released
-- PENDING BDISK REWRITE
docs:
http://lxml.de/parsing.html http://lxml.de/parsing.html
https://www.w3.org/2001/XMLSchema.xsd https://www.w3.org/2001/XMLSchema.xsd
@ -41,3 +48,15 @@ https://www.w3schools.com/xml/schema_intro.asp
https://www.w3schools.com/xml/schema_example.asp
https://msdn.microsoft.com/en-us/library/dd489258.aspx


if i ever need a list of GPT GUIDs, maybe to do some fancy GUID-to-name-and-back mapping?
https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
(mapping can be done via https://stackoverflow.com/questions/483666/reverse-invert-a-dictionary-mapping)



docs todo:
- syntax notation:
bold element/attribute names are required (only specified once).
regular are optional.
italicized means there can be multiple (none, one or many) specified.
italicized and bold means there must be at LEAST one.

View File

@ -1,183 +0,0 @@
{
"boot": {
"bootloader": "grub",
"efi": true,
"target": "/boot"
},
"disks": {
"/dev/sda": {
"fmt": "gpt",
"parts": {
"1": {
"fstype": "8300",
"start": "0%",
"stop": "95%"
},
"2": {
"fstype": "ef00",
"start": "95%",
"stop": "100%"
}
}
},
"/dev/sdb": {
"fmt": "gpt",
"parts": {
"1": {
"fstype": "8300",
"start": "0%",
"stop": "47%"
},
"2": {
"fstype": "8300",
"start": "47%",
"stop": "95%"
},
"3": {
"fstype": "8200",
"start": "95%",
"stop": "100%"
}
}
}
},
"mounts": {
"1": {
"device": "/dev/sda1",
"fstype": "ext4",
"opts": "defaults",
"target": "/mnt/aif"
},
"2": {
"device": "/dev/sda2",
"fstype": "vfat",
"opts": "rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro",
"target": "/mnt/aif/boot"
},
"3": {
"device": "/dev/sdb1",
"fstype": "ext4",
"opts": "defaults",
"target": "/mnt/aif/home"
},
"4": {
"device": "/dev/sdb2",
"fstype": "ext4",
"opts": "defaults",
"target": "/mnt/aif/mnt/data"
},
"5": {
"device": "/dev/sdb3",
"fstype": false,
"opts": false,
"target": "swap"
}
},
"network": {
"hostname": "aif.loc.lan",
"ifaces": {
"ens3": {
"address": "auto",
"gw": false,
"proto": "ipv4",
"resolvers": false
},
"ens4": {
"address": "192.168.1.2/24",
"gw": "192.168.1.1",
"proto": "ipv4",
"resolvers": [
"4.2.2.1",
"4.2.2.2"
]
}
}
},
"scripts": {
"pkg": false,
"post": {
"1": {
"auth": "digest",
"password": "password",
"realm": "realmname",
"uri": "https://aif.square-r00t.net/sample-scripts/post/first.sh",
"user": "test"
}
},
"pre": false
},
"software": {
"mirrors": [
"http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch",
"http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch",
"http://arch.mirror.constant.com/$repo/os/$arch",
"http://mirror.vtti.vt.edu/archlinux/$repo/os/$arch",
"http://arch.mirrors.pair.com/$repo/os/$arch",
"http://mirror.yellowfiber.net/archlinux/$repo/os/$arch"
],
"packages": {
"openssh": "None"
},
"pkgr": false,
"repos": {
"community": {
"enabled": true,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
},
"community-testing": {
"enabled": false,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
},
"core": {
"enabled": true,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
},
"extra": {
"enabled": true,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
},
"multilib": {
"enabled": true,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
},
"multilib-testing": {
"enabled": false,
"mirror": "file:///etc/pacman.d/mirrorlist",
"siglevel": "default"
}
}
},
"system": {
"chrootpath": "/mnt/aif",
"kbd": "US",
"locale": "en_US.UTF-8",
"reboot": true,
"rootpass": "$6$aIK0xvxLa/9BTEDu$xFskR0cQcEi273I8dgUtyO7WjjhHUZOfyS6NemelPgfMJORxbjgI6QCW6wEcCh7NVA1qGDpS0Lyg9vDCaRnA9/",
"services": {
"sshd": true
},
"timezone": "UTC",
"users": {
"aifusr": {
"comment": "A Test User",
"gid": false,
"group": false,
"home": false,
"password": "$6$arRyKn/VsusyJNQo$huX4aa1aJPzRMyyqeEw6IxC1KC1EKKJ8RXdQp6W68Yt7SVdHjwU/fEDvPb3xD3lUHOQ6ysLKWLkEXFNYxLpMf1",
"sudo": true,
"uid": false,
"xgroups": {
"users": {
"create": false,
"gid": false
}
}
}
}
}
}

View File

@ -1,96 +0,0 @@
{'boot': {'bootloader': 'grub', 'efi': True, 'target': '/boot'},
'disks': {'/dev/sda': {'fmt': 'gpt',
'parts': {1: {'fstype': '8300',
'start': '0%',
'stop': '95%'},
2: {'fstype': 'ef00',
'start': '95%',
'stop': '100%'}}},
'/dev/sdb': {'fmt': 'gpt',
'parts': {1: {'fstype': '8300',
'start': '0%',
'stop': '47%'},
2: {'fstype': '8300',
'start': '47%',
'stop': '95%'},
3: {'fstype': '8200',
'start': '95%',
'stop': '100%'}}}},
'mounts': {1: {'device': '/dev/sda1',
'fstype': 'ext4',
'opts': 'defaults',
'target': '/mnt/aif'},
2: {'device': '/dev/sda2',
'fstype': 'vfat',
'opts': 'rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro',
'target': '/mnt/aif/boot'},
3: {'device': '/dev/sdb1',
'fstype': 'ext4',
'opts': 'defaults',
'target': '/mnt/aif/home'},
4: {'device': '/dev/sdb2',
'fstype': 'ext4',
'opts': 'defaults',
'target': '/mnt/aif/mnt/data'},
5: {'device': '/dev/sdb3',
'fstype': False,
'opts': False,
'target': 'swap'}},
'network': {'hostname': 'aif.loc.lan',
'ifaces': {'ens3': {'address': 'auto',
'gw': False,
'proto': 'ipv4',
'resolvers': False},
'ens4': {'address': '192.168.1.2/24',
'gw': '192.168.1.1',
'proto': 'ipv4',
'resolvers': ['4.2.2.1', '4.2.2.2']}}},
'scripts': {'pkg': False,
'post': {1: {'auth': 'digest',
'password': 'password',
'realm': 'realmname',
'uri': 'https://aif.square-r00t.net/sample-scripts/post/first.sh',
'user': 'test'}},
'pre': False},
'software': {'mirrors': ['http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch',
'http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch',
'http://arch.mirror.constant.com/$repo/os/$arch',
'http://mirror.vtti.vt.edu/archlinux/$repo/os/$arch',
'http://arch.mirrors.pair.com/$repo/os/$arch',
'http://mirror.yellowfiber.net/archlinux/$repo/os/$arch'],
'packages': {'openssh': None},
'pkgr': False,
'repos': {'community': {'enabled': True,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'},
'community-testing': {'enabled': False,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'},
'core': {'enabled': True,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'},
'extra': {'enabled': True,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'},
'multilib': {'enabled': True,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'},
'multilib-testing': {'enabled': False,
'mirror': 'file:///etc/pacman.d/mirrorlist',
'siglevel': 'default'}}},
'system': {'chrootpath': '/mnt/aif',
'kbd': 'US',
'locale': 'en_US.UTF-8',
'reboot': True,
'rootpass': '$6$aIK0xvxLa/9BTEDu$xFskR0cQcEi273I8dgUtyO7WjjhHUZOfyS6NemelPgfMJORxbjgI6QCW6wEcCh7NVA1qGDpS0Lyg9vDCaRnA9/',
'services': {'sshd': True},
'timezone': 'UTC',
'users': {'aifusr': {'comment': 'A Test User',
'gid': False,
'group': False,
'home': False,
'password': '$6$arRyKn/VsusyJNQo$huX4aa1aJPzRMyyqeEw6IxC1KC1EKKJ8RXdQp6W68Yt7SVdHjwU/fEDvPb3xD3lUHOQ6ysLKWLkEXFNYxLpMf1',
'sudo': True,
'uid': False,
'xgroups': {'users': {'create': False,
'gid': False}}}}}}

View File

@ -1,62 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<aif xmlns:aif="http://aif.square-r00t.net/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://aif.square-r00t.net aif.xsd">
<storage>
<disk device="/dev/sda" diskfmt="gpt">
<part num="1" start="0%" stop="10%" fstype="ef00" />
<part num="2" start="10%" stop="100%" fstype="8300" />
</disk>
<mount source="/dev/sda2" target="/mnt/aif" order="1" />
<mount source="/dev/sda1" target="/mnt/aif/boot" order="2" />
</storage>
<network hostname="aiftest.square-r00t.net">
<iface device="auto" address="auto" netproto="ipv4" />
</network>
<system timezone="EST5EDT" locale="en_US.UTF-8" chrootpath="/mnt/aif" reboot="1">
<users rootpass="!" />
<service name="sshd" status="1" />
<service name="cronie" status="1" />
<service name="haveged" status="1" />
</system>
<pacman command="apacman -S">
<repos>
<repo name="core" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="extra" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="community" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib-testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="archlinuxfr" enabled="false" siglevel="Optional TrustedOnly" mirror="http://repo.archlinux.fr/$arch" />
</repos>
<mirrorlist>
<mirror>http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://ftp.osuosl.org/pub/archlinux/$repo/os/$arch</mirror>
<mirror>http://arch.mirrors.ionfish.org/$repo/os/$arch</mirror>
<mirror>http://mirrors.gigenet.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirror.jmu.edu/pub/archlinux/$repo/os/$arch</mirror>
</mirrorlist>
<software>
<package name="sed" repo="core" />
<package name="python" />
<package name="openssh" />
<package name="vim" />
<package name="vim-plugins" />
<package name="haveged" />
<package name="byobu" />
<package name="etc-update" />
<package name="cronie" />
<package name="mlocate" />
<package name="mtree-git" />
</software>
</pacman>
<bootloader type="grub" target="/boot" efi="true" />
<scripts>
<script uri="https://aif.square-r00t.net/cfgs/scripts/pkg/python.sh" order="1" execution="pkg" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/pkg/apacman.py" order="2" execution="pkg" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/sshsecure.py" order="1" execution="post" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/sshkeys.py" order="2" execution="post" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/configs.py" order="3" execution="post" />
</scripts>
</aif>

View File

@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<aif xmlns:aif="https://aif.square-r00t.net"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://aif.square-r00t.net aif.xsd">
<storage>
<disk device="/dev/sda" diskfmt="gpt">
<part num="1" start="0%" stop="10%" fstype="ef00" />
<part num="2" start="10%" stop="80%" fstype="8300" />
<part num="3" start="80%" stop="100%" fstype="8200" />
</disk>
<mount source="/dev/sda2" target="/mnt/aif" order="1" />
<mount source="/dev/sda1" target="/mnt/aif/boot" order="2" />
<mount source="/dev/sda3" target="swap" order="3" />
</storage>
<network hostname="aiftest.square-r00t.net">
<iface device="auto" address="auto" netproto="ipv4" />
</network>
<system timezone="EST5EDT" locale="en_US.UTF-8" chrootpath="/mnt/aif" reboot="0">
<!-- note: all password hashes below are "test"; don't waste your time trying to crack. :) -->
<users rootpass="$6$3YPpiS.l3SQC6ELe$NQ4qMvcDpv5j1cCM6AGNc5Hyg.rsvtzCt2VWlSbuZXCGg2GB21CMUN8TMGS35tdUezZ/n9y3UFGlmLRVWXvZR.">
<user name="aifusr"
sudo="true"
password="$6$WtxZKOyaahvvWQRG$TUys60kQhF0ffBdnDSJVTA.PovwCOajjMz8HEHL2H0ZMi0bFpDTQvKA7BqzM3nA.ZMAUxNjpJP1dG/eA78Zgw0"
comment="A test user for AIF.">
<home path="/opt/aifusr" create="true" />
<xgroup name="admins" create="true" />
<xgroup name="wheel" />
<xgroup name="users" />
</user>
</users>
<service name="sshd" status="0" />
</system>
<pacman>
<repos>
<repo name="core" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="extra" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="community" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib-testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="archlinuxfr" enabled="false" siglevel="Optional TrustedOnly" mirror="http://repo.archlinux.fr/$arch" />
</repos>
<mirrorlist>
<mirror>http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch</mirror>
<mirror>http://ftp.osuosl.org/pub/archlinux/$repo/os/$arch</mirror>
<mirror>http://arch.mirrors.ionfish.org/$repo/os/$arch</mirror>
<mirror>http://mirrors.gigenet.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirror.jmu.edu/pub/archlinux/$repo/os/$arch</mirror>
</mirrorlist>
<software>
<package name="sed" repo="core" />
<package name="python" />
<package name="perl" />
<package name="openssh" />
</software>
</pacman>
<bootloader type="grub" target="/boot" efi="true" />
<scripts>
<script uri="https://aif.square-r00t.net/sample-scripts/post/first.sh" order="1" execution="post" />
<script uri="https://aif.square-r00t.net/sample-scripts/pre/second.pl" order="2" execution="pre" />
<script uri="https://aif.square-r00t.net/sample-scripts/pre/first.sh" order="1" execution="pre" />
<script uri="https://aif.square-r00t.net/sample-scripts/post/second.py" order="2" execution="post" />
</scripts>
</aif>

View File

@ -0,0 +1,93 @@
#!/usr/bin/env python3

import os
import re
import subprocess
import uuid
##
import requests
from bs4 import BeautifulSoup

# You, the average user, will probably have absolutely no use for this.

types = {'gpt': {'local': [],
'wiki': {}},
'msdos': {'local': [],
'src': []}}

# GPT
cmd = ['/usr/bin/sfdisk', '--list-types', '--label=gpt']
url = 'https://en.wikipedia.org/wiki/GUID_Partition_Table'
# First get the local list.
with open(os.devnull, 'wb') as devnull:
cmd_out = subprocess.run(cmd, stdout = subprocess.PIPE, stderr = devnull)
stdout = [i for i in cmd_out.stdout.decode('utf-8').splitlines() if i not in ('Id Name', '')]
for idx, line in enumerate(stdout):
i = idx + 1
l = line.split()
u = l.pop(0)
desc = ' '.join(l)
types['gpt']['local'].append((i, desc, uuid.UUID(hex = u)))
# Then wikipedia.
req = requests.get(url)
if not req.ok:
raise RuntimeError('Could not access {0}'.format(url))
soup = BeautifulSoup(req.content, 'lxml')
tbl = soup.find('span', attrs = {'id': 'Partition_type_GUIDs', 'class': 'mw-headline'}).findNext('table').find('tbody')
c = None
t = None
idx = 1
strip_ref = re.compile(r'(?P<name>[A-Za-z\s()/0-9,.+-]+)\[?.*')
for row in tbl.find_all('tr'):
cols = [e.text.strip() for e in row.find_all('td')]
if not cols:
continue
if len(cols) == 3:
temp_c = strip_ref.search(cols[0].strip())
if not temp_c:
raise RuntimeError('Error when parsing/regexing: {0}'.format(cols[0].strip()))
c = temp_c.group('name')
cols.pop(0)
if c not in types['gpt']['wiki']:
types['gpt']['wiki'][c] = []
if len(cols) == 2:
temp_t = strip_ref.search(cols[0].strip())
if not temp_t:
raise RuntimeError('Error when parsing/regexing: {0}'.format(cols[0].strip()))
t = temp_t.group('name')
cols.pop(0)
u = cols[0]
types['gpt']['wiki'][c].append((idx, t, uuid.UUID(hex = u)))
idx += 1

# MSDOS
cmd = ['/usr/bin/sfdisk', '--list-types', '--label=dos']
url = 'https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/plain/include/pt-mbr-partnames.h'
with open(os.devnull, 'wb') as devnull:
cmd_out = subprocess.run(cmd, stdout = subprocess.PIPE, stderr = devnull)
stdout = [i for i in cmd_out.stdout.decode('utf-8').splitlines() if i not in ('Id Name', '')]
for idx, line in enumerate(stdout):
i = idx + 1
l = line.split()
b = '{0:0>2}'.format(l.pop(0).upper())
desc = ' '.join(l)
types['msdos']['local'].append((i, desc, bytes.fromhex(b)))
# Then the source (master branch's HEAD). It gets messy but whatever. This is actually something unique to fdisk.
req = requests.get(url)
if not req.ok:
raise RuntimeError('Could not access {0}'.format(url))
line_re = re.compile(r'^\s+{0x')
str_re = re.compile(r'^\s+{0x(?P<b>[A-Fa-f0-9]+),\s*N_\("(?P<desc>[^"]+)"\).*')
idx = 1
for line in req.content.decode('utf-8').splitlines():
if not line_re.search(line):
continue
s = str_re.search(line)
if not s:
raise RuntimeError('Error when parsing/regexing: {0}'.format(line.strip()))
b = s.group('b').upper()
desc = s.group('desc')
types['msdos']['src'].append((idx, desc, bytes.fromhex(b)))
idx += 1

print(types)

11
docs/urls Normal file
View File

@ -0,0 +1,11 @@
libblockdev/python gobject-introspection ("gi") API reference:
https://lazka.github.io/pgi-docs/

example of using above for LVM:
https://github.com/storaged-project/libblockdev/blob/master/tests/lvm_test.py


using libnm with pygobject-introspection examples:
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/blob/master/examples/python/gi/
NM.SETTING_CONNECTION_TYPE = https://developer.gnome.org/NetworkManager/stable/ch01.html
https://developer.gnome.org/libnm/stable/ch03.html

273
examples/aif.xml Normal file
View File

@ -0,0 +1,273 @@
<?xml version="1.0" encoding="UTF-8" ?>
<aif xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="https://aif-ng.io/"
xsi:schemaLocation="https://aif-ng.io/ http://schema.xml.r00t2.io/projects/aif.xsd"
chrootPath="/mnt/aif"
reboot="false">
<bootstrap>
<tarball>
https://arch.mirror.square-r00t.net/iso/latest/archlinux-bootstrap-2020.03.01-x86_64.tar.gz
</tarball>
<!-- <tarball>-->
<!-- file:///tmp/archlinux-bootstrap-2020.01.01-x86_64.tar.gz-->
<!-- </tarball>-->
<verify>
<gpg>
<sigs>
<signatureFile>
https://arch.mirror.square-r00t.net/iso/latest/archlinux-bootstrap-2020.03.01-x86_64.tar.gz.sig
</signatureFile>
<!-- <signatureFile>-->
<!-- file:///tmp/archlinux-bootstrap-2020.01.01-x86_64.tar.gz.sig-->
<!-- </signatureFile>-->
</sigs>
<keys detect="false">
<keyID>0x4AA4767BBC9C4B1D18AE28B77F2D434B9741E8AC</keyID>
</keys>
</gpg>
<hash>
<checksumFile hashType="md5" fileType="gnu">
http://arch.mirror.square-r00t.net/iso/latest/md5sums.txt
</checksumFile>
<checksumFile hashType="sha1" fileType="gnu">
http://arch.mirror.square-r00t.net/iso/latest/sha1sums.txt
</checksumFile>
</hash>
</verify>
</bootstrap>
<storage>
<blockDevices>
<disk id="sda" device="/dev/sda" diskFormat="gpt">
<!-- Partitions are numbered *in the order they are specified*. -->
<!-- e.g. "boot" would be /dev/sda1, "secrets1" would be /dev/sda2, etc. -->
<part id="boot" name="BOOT" label="/boot" start="0%" stop="10%" fsType="fat32">
<partitionFlag>esp</partitionFlag>
</part>
<part id="secrets1" name="crypted" label="shh" start="10%" stop="20%" fsType="ext4">
<partitionFlag>root</partitionFlag>
</part>
<part id="lvm_member1" name="jbod" label="dynamic" start="20%" stop="30%" fsType="ext4">
<partitionFlag>lvm</partitionFlag>
</part>
<part id="raid1_d1" start="30%" stop="55%" fsType="ext4">
<partitionFlag>raid</partitionFlag>
</part>
<part id="raid1_d2" start="55%" stop="80%" fsType="ext4">
<partitionFlag>raid</partitionFlag>
</part>
<part id="swapdisk" start="80%" stop="90%" fsType="linux-swap(v1)">
<partitionFlag>swap</partitionFlag>
</part>
<!-- You can also create a partition with no flags (and not use). -->
<part id="grow" start="90%" stop="100%" fsType="ext4"/>
</disk>
</blockDevices>
<!-- "Special" devices are processed *in the order they are specified*. This is important if you wish to
e.g. layer LVM on top of LUKS - you would specify <lvm> before <luks> and reference the
<luksDev id="SOMETHING" ... > as <pv source="SOMETHING" ... />.
Of course, a limitation of this is you cannot e.g. first assemble a LUKS volume, then an LVM
group, and then another LUKS volume - so plan accordingly and/or perform that in
a <post> script instead. -->
<luks>
<luksDev id="luks_secrets" name="secrets" source="secrets1">
<!-- You can assign multiple secrets (or "keys") to a LUKS volume. -->
<secrets>
<!-- A simple passphrase. -->
<passphrase>secrets1</passphrase>
</secrets>
<secrets>
<!-- A key that uses a keyfile on a mounted path. This example uses the passphrase in
a plaintext file, which is in turn read by LUKS. -->
<passphrase>secrets1</passphrase>
<keyFile>/boot/.decrypt.plaintext</keyFile>
</secrets>
<secrets>
<!-- This will generate a 4096-byte file of random data. -->
<keyFile size="4096">/root/.decrypt.key</keyFile>
</secrets>
</luksDev>
</luks>
<lvm>
<volumeGroup id="vg1" name="group1" extentSize="4MiB">
<physicalVolumes>
<pv id="pv1" source="lvm_member1"/>
</physicalVolumes>
<logicalVolumes>
<!-- Default is to add all available PVs in PhysicalVolumes... -->
<lv id="lv1" name="logical1" size="80%"/>
<lv id="lv2" name="logical2" size="512MiB">
<!-- But you can also explicitly designate them. They have to still be in the same volumeGroup.
This is generally speaking a *terrible* idea, though, because it makes getting the
sizes right virtually *impossible*. If you do this, you should consistently ONLY use
bytes for each LV size and know the size of the PVs/VGs ahead of time. -->
<pvMember source="pv1"/>
</lv>
</logicalVolumes>
</volumeGroup>
</lvm>
<mdadm>
<!-- level can be 0, 1, 4, 5, 6, or 10. RAID 1+0 (which is different from mdadm RAID10) would be done by
creating an array with members of a previously defined array. -->
<array id="mdadm1" name="data" meta="1.2" level="1">
<member source="raid1_d1"/>
<member source="raid1_d2"/>
</array>
</mdadm>
<fileSystems>
<fs id="esp" source="boot" type="vfat">
<!-- Supports mkfs arguments. Leave off the filesystem type and device name, obviously;
those are handled by the above attributes. -->
<opt name="-F">32</opt>
<opt name="-n">ESP</opt>
</fs>
<fs id="luks" source="luks_secrets" type="ext4">
<opt name="-L">seekrit</opt>
</fs>
<fs id="swap" source="swap" type="swap"/>
<fs id="vg1-lv1" source="lv1" type="ext4"/>
<fs id="mdraid" source="mdadm1" type="ext4"/>
</fileSystems>
<mountPoints>
<!-- And you use the id to reference mountpoints as well. Important to note, we mount *filesystems*,
not partitions/disks/etc. -->
<!-- Note that targets should be *outside* of the chroot!
e.g. /aif/storage/mountPoints[@target="/mnt/aif/boot"]
and
/aif/system[@chrootPath="/mnt/aif"]
would lead to the filesystem being accessible *inside* the chroot (and thus the completed install)
at /boot. -->
<mount source="luks" target="/mnt/aif">
<opt name="rw"/>
<opt name="relatime"/>
<opt name="compress">lzo</opt>
<opt name="ssd"/>
<opt name="space_cache"/>
<opt name="subvolid">5</opt>
<opt name="subvol">/</opt>
</mount>
<mount source="esp" target="/mnt/aif/boot"/>
<mount source="swap" target="swap"/>
<mount source="vg1-lv1" target="/mnt/aif/mnt/pool"/>
<mount source="mdraid" target="/mnt/aif/mnt/raid"/>
</mountPoints>
</storage>
<network hostname="aiftest.square-r00t.net" provider="netctl">
<ethernet id="lan" device="auto" defroute="true" searchDomain="domain.tld">
<addresses>
<ipv4 auto="true">
<address gateway="192.168.1.1">192.168.1.5/24</address>
</ipv4>
<ipv6 auto="slaac">
<address>fde4:16b9:654b:bbfa::15/64</address>
</ipv6>
</addresses>
<routes>
<ipv4 auto="true">
<route gateway="192.168.1.1">10.1.1.0/24</route>
<route gateway="10.1.1.4">172.16.1.20/32</route>
</ipv4>
<ipv6 auto="true"/>
</routes>
<resolvers>
<ipv4 auto="false"/>
<ipv6 auto="false"/>
<resolver>64.6.64.6</resolver>
<resolver>4.2.2.1</resolver>
<resolver>8.8.8.8</resolver>
</resolvers>
</ethernet>
<wireless id="wlan" device="wlp2s0" essid="MyWirelessLan"
bssid="00-00-5E-00-53-00" defroute="false" searchDomain="wifi.lan">
<addresses>
<ipv4 auto="true"/>
</addresses>
<routes>
<ipv6 auto="true"/>
</routes>
<encryption>
<type>wpa2</type>
<creds>
<psk isKey="false">ABadWiFiPassword</psk>
<!-- Or the key itself. See the manual for ways to generate this. -->
<!-- <psk isKey="true">ca8981cbe55374c7408af0174604588111b4611832969f87fc5604fe4c36365c</psk> -->
</creds>
</encryption>
</wireless>
</network>
<system timezone="EST5EDT">
<rootPassword>
<passwordPlain>1ns3cur3p4ssw0rd</passwordPlain>
</rootPassword>
<locales>
<locale name="LANG">en_US.UTF-8</locale>
</locales>
<console>
<text>
<font>default8x16</font>
</text>
<keyboard>
<map>us</map>
</keyboard>
</console>
<!-- Note: The password hash below is "test"; don't waste your time trying to crack. :) -->
<users>
<user name="aifusr"
home="/opt/aifusr"
sudo="true"
comment="A test user for AIF.">
<password>
<passwordHash hashType="(detect)">
$6$WtxZKOyaahvvWQRG$TUys60kQhF0ffBdnDSJVTA.PovwCOajjMz8HEHL2H0ZMi0bFpDTQvKA7BqzM3nA.ZMAUxNjpJP1dG/eA78Zgw0
</passwordHash>
</password>
<xGroup name="admins" create="true"/>
<xGroup name="wheel"/>
<xGroup name="users"/>
</user>
</users>
<services>
<service status="true">sshd</service>
</services>
</system>
<pacman>
<mirrorList>
<mirror>http://arch.mirror.square-r00t.net/$repo/os/$arch</mirror>
<mirror>http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch</mirror>
<mirror>http://ftp.osuosl.org/pub/archlinux/$repo/os/$arch</mirror>
<mirror>http://arch.mirrors.ionfish.org/$repo/os/$arch</mirror>
<mirror>http://mirrors.gigenet.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirror.jmu.edu/pub/archlinux/$repo/os/$arch</mirror>
</mirrorList>
<repos>
<repo name="core" enabled="true" sigLevel="default">
<include>file:///etc/pacman.d/mirrorlist</include>
</repo>
<repo name="extra" enabled="true" sigLevel="default"/>
<repo name="community" enabled="true" sigLevel="default"/>
<repo name="multilib" enabled="true" sigLevel="default"/>
<repo name="testing" enabled="false" sigLevel="default"/>
<repo name="multilib-testing" enabled="false" sigLevel="default"/>
<repo name="sqrt" enabled="false" sigLevel="Required">
<mirror>https://$repo.arch.repo.square-r00t.net</mirror>
</repo>
</repos>
<software>
<package repo="core">sed</package>
<package>python</package>
<package>perl</package>
<package>openssh</package>
</software>
</pacman>
<bootloader type="grub" target="/boot" efi="true"/>
<scripts>
<pre>
<script>https://aif.square-r00t.net/sample-scripts/pre/first.sh</script>
<script>https://aif.square-r00t.net/sample-scripts/pre/second.pl</script>
</pre>
<post>
<script>https://aif.square-r00t.net/sample-scripts/post/first.sh</script>
<script>https://aif.square-r00t.net/sample-scripts/post/second.py</script>
</post>
</scripts>
</aif>

63
examples/most_minimal.xml Normal file
View File

@ -0,0 +1,63 @@
<?xml version="1.0" encoding="UTF-8" ?>
<aif xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://aif-ng.io/"
xsi:schemaLocation="http://aif-ng.io/ http://aif-ng.io/aif.xsd"
version="v2_rewrite">
<storage>
<blockDevices>
<disk id="sda" device="auto" diskFormat="gpt">
<part id="boot" name="BOOT" start="0%" stop="10%" fsType="fat32">
<partitionFlag>esp</partitionFlag>
</part>
<part id="root" name="root" start="10%" stop="100%" fsType="ext4">
<partitionFlag>root</partitionFlag>
</part>
</disk>
</blockDevices>
<fileSystems>
<fs id="esp" source="boot" type="vfat">
<opt name="-F">32</opt>
</fs>
<fs id="rootfs" type="ext4" source="root"/>
</fileSystems>
<mountPoints>
<mount source="rootfs" target="/mnt/aif/"/>
<mount source="esp" target="/mnt/aif/boot"/>
</mountPoints>
</storage>
<network hostname="aiftest.square-r00t.net">
<iface device="auto">
<addresses>
<ipv4>
<address>dhcp</address>
</ipv4>
<ipv6>
<address>slaac</address>
</ipv6>
</addresses>
<resolvers>
<resolver>4.2.2.1</resolver>
<resolver>4.2.2.2</resolver>
<resolver>4.2.2.3</resolver>
</resolvers>
</iface>
</network>
<system timezone="UTC" chrootPath="/mnt/aif" reboot="1">
<locales>
<locale name="LANG">en_US.UTF-8</locale>
</locales>
</system>
<pacman>
<repos>
<repo name="core" enabled="true" sigLevel="default" mirror="file:///etc/pacman.d/mirrorlist"/>
<repo name="extra" enabled="true" sigLevel="default" mirror="file:///etc/pacman.d/mirrorlist"/>
<repo name="community" enabled="true" sigLevel="default" mirror="file:///etc/pacman.d/mirrorlist"/>
<repo name="multilib" enabled="true" sigLevel="default" mirror="file:///etc/pacman.d/mirrorlist"/>
</repos>
<mirrorList>
<mirror>http://arch.mirror.square-r00t.net/$repo/os/$arch</mirror>
</mirrorList>
</pacman>
<bootloader type="grub" target="/boot" efi="true"/>
</aif>

View File

@ -1,104 +0,0 @@
###########################################################
## BUILD.CONF SAMPLE FILE ##
###########################################################
#
# This file is used to define various variables/settings
# used by the build script.
#
# For full (perhaps overly-verbose ;) documentation, please
# see:
# https://bdisk.square-r00t.net/#_the_code_build_ini_code_file
# Or simply refer to the section titled "The build.ini File"
# in the user manual.

[bdisk]
name = AIF
uxname = aif
pname = AIF-NG
ver = 1.00
dev = r00t^2
email = bts@square-r00t.net
desc = See https://aif.square-r00t.net/
uri = https://aif.square-r00t.net/
root_password = BLANK
user = no

[user]
username = ${bdisk:uxname}
name = Default user
password = BLANK

[source_x86_64]
mirror = mirror.us.leaseweb.net
#mirrorproto = https
mirrorproto = http
mirrorpath = /archlinux/iso/latest/
mirrorfile =
mirrorchksum = ${mirrorpath}sha1sums.txt
chksumtype = sha1
mirrorgpgsig = .sig
gpgkey = 4AA4767BBC9C4B1D18AE28B77F2D434B9741E8AC
gpgkeyserver =

[source_i686]
mirror = mirror.us.leaseweb.net
#mirrorproto = https
mirrorproto = http
mirrorpath = /archlinux/iso/latest/
mirrorfile =
mirrorchksum = ${mirrorpath}sha1sums.txt
chksumtype = sha1
mirrorgpgsig = .sig
gpgkey = 7F2D434B9741E8AC
gpgkeyserver =

[build]
gpg = yes
dlpath = /var/tmp/${bdisk:uxname}
chrootdir = /var/tmp/chroots
basedir = /opt/dev/bdisk
isodir = ${dlpath}/iso
srcdir = ${dlpath}/src
prepdir = ${dlpath}/temp
archboot = ${prepdir}/${bdisk:name}
mountpt = /mnt/${bdisk:uxname}
multiarch = 64
sign = yes
ipxe = yes
i_am_a_racecar = yes

[gpg]
mygpgkey = 748231EBCBD808A14F5E85D28C004C2F93481F6B
mygpghome = /root/.gnupg

[sync]
http = yes
tftp = yes
git = no
rsync = no

[http]
path = ${build:dlpath}/http
user = root
group = root

[tftp]
path = ${build:dlpath}/tftpboot
user = root
group = root

[ipxe]
iso = yes
uri = https://aif.square-r00t.net/boot.ipxe
ssldir = ${build:dlpath}/ssl
ssl_ca = ${ssldir}/ca.crt
ssl_cakey = ${ssldir}/ca.key
ssl_crt = ${ssldir}/main.crt
ssl_key = ${ssldir}/main.key

[rsync]
#host = 10.1.1.1
host = bdisk.square-r00t.net
user = root
path = /srv/http/bdisk_ipxe
iso = yes


@@ -1,208 +0,0 @@
#!/usr/bin/expect -f

log_file -noappend /tmp/expect.log
set force_conservative 0  ;# set to 1 to force conservative mode even if
                          ;# script wasn't run conservatively originally
if {$force_conservative} {
    set send_slow {1 .1}
    proc send {ignore arg} {
        sleep .1
        exp_send -s -- $arg
    }
}

#set send_slow {10 .001}

set timeout -1
#spawn ./aif-config.py create -v:r -f /tmp/aif.xml
spawn ./aif-config.py create -v -f /tmp/aif.xml
## disks
send -- "/dev/sda,/dev/sdb\r"
# sda
send -- "gpt\r"
send -- "2\r"
# sda1
send -- "0%\r"
send -- "95%\r"
send -- "8300\r"
# sda2
send -- "95%\r"
send -- "100%\r"
send -- "ef00\r"
# sdb
send -- "gpt\r"
send -- "3\r"
# sdb1
send -- "0%\r"
send -- "47%\r"
send -- "8300\r"
# sdb2
send -- "47%\r"
send -- "95%\r"
send -- "8300\r"
# sdb3
send -- "95%\r"
send -- "100%\r"
send -- "8200\r"
## mounts
send -- "/mnt/aif,/mnt/aif/boot,/mnt/aif/home,/mnt/aif/mnt/data,swap\r"
# /mnt/aif
send -- "/dev/sda1\r"
send -- "1\r"
send -- "ext4\r"
send -- "defaults\r"
# /mnt/aif/boot
send -- "/dev/sda2\r"
send -- "2\r"
send -- "vfat\r"
send -- "rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro\r"
# /mnt/aif/home
send -- "/dev/sdb1\r"
send -- "3\r"
send -- "ext4\r"
send -- "defaults\r"
# /mnt/aif/mnt/data
send -- "/dev/sdb2\r"
send -- "4\r"
send -- "ext4\r"
send -- "defaults\r"
# swap
send -- "/dev/sdb3\r"
send -- "5\r"
## network
# hostname
send -- "aif.loc.lan\r"
# interface
send -- "ens3\r"
send -- "auto\r"
send -- "ipv4\r"
# add another interface?
send -- "y\r"
# second interface
send -- "ens4\r"
send -- "192.168.1.2/24\r"
send -- "192.168.1.1\r"
send -- "4.2.2.1,4.2.2.2\r"
# add another interface? default is no
send -- "\r"
## system
# timezone (default is UTC)
send -- "\r"
# locale (default is en_US.UTF-8)
send -- "\r"
# chroot path
send -- "/mnt/aif\r"
# kbd (default is US)
send -- "\r"
# reboot host after install? default is yes
send -- "\r"
# root password
sleep 2
send -- "test\r"
sleep 2
expect *
# add user?
send -- "y\r"
# user
send -- "aifusr\r"
# sudo access
send -- "y\r"
# password
sleep 2
send -- "test\r"
sleep 2
send -- "A Test User\r"
# uid (default is autogen)
send -- "\r"
# primary group (default is autogen'd based on username)
send -- "\r"
# home dir (default is e.g. /home/username)
send -- "\r"
# add extra groups?
send -- "y\r"
# extra group
send -- "users\r"
# does the group need to be created? default is no
send -- "\r"
# add another extra group? default is no
send -- "\r"
# add more users? default is no
send -- "\r"
# enable/disable services
send -- "y\r"
# service
send -- "sshd\r"
# enable? default is yes
send -- "\r"
# manage another service? default is no
send -- "\r"
# packager (default is pacman)
send -- "\r"
# review default repos? default is yes
send -- "\r"
# edit any of them?
send -- "y\r"
# edit the 6th repo (multilib)
send -- "6\r"
# enabled?
send -- "y\r"
# siglevel (default is unchanged)
send -- "\r"
# mirror URI (default is unchanged)
send -- "\r"
# edit another repo? default is no
send -- "\r"
# add additional repositories? default is no
send -- "\r"
# modify default mirrorlist?
send -- "y\r"
# URI for mirror
send -- "http://mirrors.advancedhosters.com/archlinux/\$repo/os/\$arch\r"
# add another?
send -- "y\r"
send -- "http://mirror.us.leaseweb.net/archlinux/\$repo/os/\$arch\r"
send -- "y\r"
send -- "http://arch.mirror.constant.com/\$repo/os/\$arch\r"
send -- "y\r"
send -- "http://mirror.vtti.vt.edu/archlinux/\$repo/os/\$arch\r"
send -- "y\r"
send -- "http://arch.mirrors.pair.com/\$repo/os/\$arch\r"
send -- "y\r"
send -- "http://mirror.yellowfiber.net/archlinux/\$repo/os/\$arch\r"
send -- "\r"
# install extra software?
send -- "y\r"
# software
send -- "openssh\r"
# repository (optional)
send -- "\r"
# add another package?
send -- "\r"
# bootloader (default is grub)
send -- "\r"
# system supports UEFI? default is yes
send -- "\r"
# ESP/EFI system partition
send -- "/boot\r"
# any hook scripts? default is no
send -- "y\r"
# pre, pkg, or post
send -- "post\r"
# script URI
send -- "https://aif.square-r00t.net/sample-scripts/post/first.sh\r"
# order for the execution run
send -- "1\r"
# auth required?
send -- "y\r"
# basic/digest? default is basic
send -- "digest\r"
# if digest, realm
send -- "realmname\r"
# user
send -- "test\r"
# password
send -- "password\r"
# would you like to add another script? default is no
send -- "\r"
interact
expect eof
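The same dialogue could also be driven from Python with pexpect instead of expect. A rough, hypothetical sketch follows; the expect() patterns are illustrative guesses, not aif-config.py's actual prompts, and would need to be matched to the real output.

#!/usr/bin/env python3
# Sketch: drive an interactive configurator with pexpect.
# The expect() patterns below are hypothetical placeholders.
import pexpect

child = pexpect.spawn('./aif-config.py create -v -f /tmp/aif.xml', encoding = 'utf-8')
child.logfile = open('/tmp/pexpect.log', 'w')   # rough analogue of expect's log_file
child.expect('disk')                            # hypothetical prompt fragment
child.sendline('/dev/sda,/dev/sdb')
child.expect('format')                          # hypothetical prompt fragment
child.sendline('gpt')
# ...answer the remaining prompts as in the expect script above...
child.interact()                                # hand the session back to the user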

82  extras/genPSK.py  Executable file

@@ -0,0 +1,82 @@
#!/usr/bin/env python3

import argparse
import binascii
import getpass
import sys
##
# from passlib.utils import pbkdf2  # deprecated
from passlib.crypto.digest import pbkdf2_hmac


def pskGen(ssid, passphrase):
    # WPA2-PSK key stretching: PBKDF2-HMAC-SHA1, ESSID as salt, 4096 rounds, 32-byte key.
    # raw_psk = pbkdf2.pbkdf2(str(passphrase), str(ssid), 4096, 32)  # deprecated
    raw_psk = pbkdf2_hmac('sha1', str(passphrase), str(ssid), 4096, 32)
    hex_psk = binascii.hexlify(raw_psk)
    str_psk = hex_psk.decode('utf-8')
    return(str_psk)


def parseArgs():
    def essidchk(essid):
        essid = str(essid)
        if len(essid) > 32:
            raise argparse.ArgumentTypeError('The maximum length of an ESSID is 32 characters')
        return(essid)

    def passphrasechk(passphrase):
        if passphrase:
            is_piped = False
            passphrase = str(passphrase)
            if passphrase == '-':
                if sys.stdin.isatty():
                    raise argparse.ArgumentTypeError(('[STDIN] You specified a passphrase to be entered but did not '
                                                      'provide one via a pipe.'))
                else:
                    is_piped = True
                try:
                    # WPA-PSK only accepts ASCII for the passphrase.
                    # Assign the piped value back to passphrase so it is what gets
                    # validated and returned.
                    passphrase = sys.stdin.read().encode('utf-8').decode('ascii').strip('\r').strip('\n')
                except UnicodeDecodeError:
                    raise argparse.ArgumentTypeError('[STDIN] WPA-PSK passphrases must be an ASCII string')
            if not 7 < len(passphrase) < 64:
                raise argparse.ArgumentTypeError(('{0}WPA-PSK passphrases must be no shorter than 8 characters'
                                                  ' and no longer than 63 characters. '
                                                  'Please ensure you have provided the '
                                                  'correct passphrase.').format(('[STDIN] ' if is_piped else '')))
        return(passphrase)

    args = argparse.ArgumentParser(description = 'Generate a PSK from a passphrase')
    args.add_argument('-p', '--passphrase',
                      dest = 'passphrase',
                      default = None,
                      type = passphrasechk,
                      help = ('If specified, use this passphrase (otherwise securely interactively prompt for it). '
                              'If "-" (without quotes), read from stdin (via a pipe). '
                              'WARNING: THIS OPTION IS INSECURE AND MAY EXPOSE THE PASSPHRASE GIVEN '
                              'TO OTHER PROCESSES ON THIS SYSTEM'))
    args.add_argument('ssid',
                      metavar = 'ESSID',
                      type = essidchk,
                      help = ('The ESSID (network name) to use for this passphrase. '
                              '(This is required because WPA-PSK uses it to salt the key derivation)'))
    return(args)


def main():
    args = parseArgs().parse_args()
    if not args.passphrase:
        args.passphrase = getpass.getpass(('Please enter the passphrase for '
                                           'network "{0}" (will NOT echo back): ').format(args.ssid))
        args.passphrase = args.passphrase.encode('utf-8').decode('ascii').strip('\r').strip('\n')
    if not 7 < len(args.passphrase) < 64:
        raise ValueError(('WPA-PSK passphrases must be no shorter than 8 characters'
                          ' and no longer than 63 characters. '
                          'Please ensure you have provided the correct passphrase.'))
    psk = pskGen(args.ssid, args.passphrase)
    print('PSK for network "{0}": {1}'.format(args.ssid, psk))
    return(None)


if __name__ == '__main__':
    main()
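pskGen() implements standard WPA2-PSK key stretching (PBKDF2-HMAC-SHA1 with the ESSID as salt, 4096 rounds, 256-bit output), so its result can be cross-checked with nothing but the stdlib. The value below is the published IEEE 802.11i test vector (SSID "IEEE", passphrase "password"):

#!/usr/bin/env python3
# Cross-check: the same derivation via hashlib, against the IEEE 802.11i test vector.
import binascii
import hashlib

raw = hashlib.pbkdf2_hmac('sha1', b'password', b'IEEE', 4096, dklen = 32)
psk = binascii.hexlify(raw).decode('utf-8')
assert psk == 'f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e'
print(psk)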


@@ -1,49 +0,0 @@
#!/usr/bin/env python3

import argparse
import json
import os
import pprint

try:
    import yaml
except ImportError:
    exit('You need pyYAML.')


def parseArgs():
    args = argparse.ArgumentParser()
    args.add_argument('-i',
                      '--in',
                      dest = 'infile',
                      required = True,
                      help = 'The plaintext representation of a python dict')
    args.add_argument('-o',
                      '--out',
                      dest = 'outfile',
                      required = True,
                      help = 'The JSON file to create')
    return(args)


def main():
    args = vars(parseArgs().parse_args())
    infile = os.path.abspath(os.path.normpath(args['infile']))
    outfile = os.path.abspath(os.path.normpath(args['outfile']))
    if not os.path.lexists(infile):
        exit('Input file doesn\'t exist.')
    with open(outfile, 'w') as outgoing:
        with open(infile, 'r') as incoming:
            # safe_load avoids constructing arbitrary objects from untrusted input
            d = yaml.safe_load(incoming.read())
            pprint.pprint(d)
            j = json.dumps(d, indent = 4)
            outgoing.write(j)
    # return(None) explicitly; a bare return() would yield an empty tuple
    return(None)


if __name__ == '__main__':
    main()
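Since YAML's flow syntax is loose enough to accept most Python dict literals, the script's core boils down to a safe_load followed by a dumps. A quick inline illustration with made-up sample data:

#!/usr/bin/env python3
# The core transform the script performs, shown inline with assumed sample data.
import json
import yaml

text = "{'name': 'aif', 'ver': 1.0}"   # python-dict-style plaintext
d = yaml.safe_load(text)               # parses as a YAML flow mapping
print(json.dumps(d, indent = 4))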

36  setup.py  Normal file

@@ -0,0 +1,36 @@
import setuptools
import aif.constants_fallback as PROJ_CONST

with open('README', 'r') as fh:
    long_description = fh.read()

setuptools.setup(
    name = 'aif',
    version = PROJ_CONST.VERSION,
    author = 'Brent S.',
    author_email = 'bts@square-r00t.net',
    description = 'Arch Installation Framework (Next Generation)',
    long_description = long_description,
    long_description_content_type = 'text/plain',
    url = 'https://aif-ng.io',
    packages = setuptools.find_packages(),
    classifiers = ['Programming Language :: Python :: 3',
                   'Programming Language :: Python :: 3.6',
                   'Programming Language :: Python :: 3.7',
                   'Programming Language :: Python :: 3.8',
                   'Programming Language :: Python :: 3.9',
                   'Programming Language :: Python :: 3 :: Only',
                   'Operating System :: POSIX :: Linux',
                   'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
                   'Intended Audience :: Developers',
                   'Intended Audience :: Information Technology',
                   'Topic :: Software Development :: Build Tools',
                   'Topic :: Software Development :: Testing',
                   'Topic :: System :: Installation/Setup',
                   'Topic :: System :: Recovery Tools'],
    python_requires = '>=3.6',
    project_urls = {'Documentation': 'https://aif-ng.io/',
                    'Source': 'https://git.square-r00t.net/AIF-NG/',
                    'Tracker': 'https://bugs.square-r00t.net/index.php?project=9'},
    install_requires = PROJ_CONST.EXTERNAL_DEPS
)