Discussion: home on afs woes
Juha Jäykkä
2006-01-04 11:31:00 UTC
Hi!

We have a nicely working Heimdal + LDAP database (the OS is Debian/GNU
Linux), providing user authentication and authorization. Now, we would
like to move from somewhat unreliable NFS to something more robust - and
especially something more firewall friendly - and are considering OpenAFS.
There are some concerns, though.

When I configure the user home directory to reside on AFS, ssh logins
no longer work. This is because the pam_krb5.so module tries to check the
.k5login file in the user's home directory, which fails because root,
running sshd, does not have valid AFS tokens. I rewrote the sshd startup
script to obtain both the Heimdal TGT and the corresponding AFS token. Now
it can access the .k5login (which does not exist, by the way - pam_krb5.so
seems to fail trying to stat() the file, not because it does not contain
the proper principal). This introduced another problem: if the user logs
in, the user gets the token root obtained for sshd! Why is this? It might
be relatively easy to hack around, except that if the user ever runs
unlog, the process running sshd loses access to AFS as well.

What is The Way to have all three (afs, heimdal and sshd) work together?

The versions I use are OpenSSH 4.2, Heimdal 0.7.1, OpenAFS 1.3.81 and a
CVS build of pam_krb5.so (from 31.12.2005). PAM is configured as follows:

auth sufficient pam_krb5.so external forwardable use_shmem debug
auth required pam_unix.so try_first_pass

account sufficient pam_krb5.so external forwardable use_shmem debug
account required pam_unix.so

session sufficient pam_krb5.so external forwardable use_shmem debug
session required pam_unix.so

Thanks for any help and/or pointers!

-Juha

P.S. Moderator: sorry for bothering you earlier with the same message
mistakenly sent from a non-list address!

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Andrei Maslennikov
2006-01-04 13:16:04 UTC
Try this:

/afs/italia/project/bigbox/e3/i386/updates/openssh*
(You will have to install the right /etc/krb5.conf and /etc/krb5.keytab).

Andrei.

On 1/4/06, Juha Jäykkä <***@utu.fi> wrote:
What is The Way to have all three (afs, heimdal and sshd) work together?
Juha Jäykkä
2006-01-04 14:45:30 UTC
Post by Andrei Maslennikov
/afs/italia/project/bigbox/e3/i386/updates/openssh*
You don't happen to have the sources around? I can convert the binary rpms
to binary debs, but that will eventually lead to library incompatibilities
and such. What kind of modifications have you done to the ssh server?

In my opinion, the problem is pam_krb5.so, which checks the .k5login file
in pam_sm_authenticate(). Its own documentation says it only checks
.k5login in pam_sm_acct_mgmt(), but this is incorrect. I am not sure this
is a bug, though, and therefore haven't reported it. I just thought there
must be people around who have these three working together and they must
have a solution which is more general than depending on a single pam
module. Comments?

-Juha

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Russ Allbery
2006-01-04 18:21:37 UTC
Post by Juha Jäykkä
In my opinion, the problem is pam_krb5.so, which checks the .k5login
file in pam_sm_authenticate(). Its own documentation says it only checks
.k5login in pam_sm_acct_mgmt(), but this is incorrect. I am not sure
this is a bug, though, and therefore haven't reported it. I just thought
there must be people around who have these three working together and
they must have a solution which is more general than depending on a
single pam module. Comments?
.klogin and .k5login files have always had to be world-readable. Consider
the case with ssh and forwarded credentials. You have to authenticate the
user before you can accept tickets for them, and in order to authenticate
the user you have to be able to check the .k5login file. Not checking the
.k5login file at the time of authentication is a bug; you may authenticate
a user who shouldn't be allowed to log in, and there are indeed programs
(xlockmore, for instance) that only call pam_authenticate.

The solution is to create a world-, or at least local-network-, readable
directory in every user's home directory, grant l access to the top level
of their home directory, move .k5login to the readable directory, and
symlink it. So far as I know, every site that uses AFS with Kerberos has
had to deal with this; Stanford has been doing this for all users for over
a decade. The l ACLs on the top level of the home directory are rather
unfortunate, but the other ways to work around this are much more complex.
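For a single user, the setup is roughly the following (cell and directory
names are placeholders; substitute a more restrictive group for
system:anyuser if local-network readability is enough for you):

cd /afs/your.cell/home/someuser
mkdir public
fs setacl . system:anyuser l          # l (lookup) on the top level only
fs setacl public system:anyuser rl    # the readable directory
mv .k5login public/
ln -s public/.k5login .k5login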
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Douglas E. Engert
2006-01-04 19:43:58 UTC
Post by Russ Allbery
Post by Juha Jäykkä
In my opinion, the problem is pam_krb5.so, which checks the .k5login
file in pam_sm_authenticate(). Its own documentation says it only checks
.k5login in pam_sm_acct_mgmt(), but this is incorrect. I am not sure
this is a bug, though, and therefore haven't reported it. I just thought
there must be people around who have these three working together and
they must have a solution which is more general than depending on a
single pam module. Comments?
.klogin and .k5login files have always had to be world-readable. Consider
the case with ssh and forwarded credentials. You have to authenticate the
user before you can accept tickets for them, and in order to authenticate
the user you have to be able to check the .k5login file.
I have always argued that there was some room for improvement in this area.
The .k5login should not have to be world-readable. Let me explain my argument.

The sshd could accept a forwarded ticket for the sole purpose of using it to get
an AFS token, so the sshd could access the .k5login file before krb5_kuserok
is called. (There might be some other dot files that could also be accessed early.)
Getting this ticket early does not change the security model, as the checking of
the .k5login is to allow access to the local machine, not the AFS file system.
The forwarded ticket and token could be discarded if krb5_kuserok fails.

Doing this would require some changes, and I don't know of any site
that has done it. It's too ingrained in Unix that root has access to the
home directory during login.
Post by Russ Allbery
Not checking the
.k5login file at the time of authentication is a bug; you may authenticate
a user who shouldn't be allowed to log in, and there are indeed programs
(xlockmore, for instance) that only call pam_authenticate.
The solution is to create a world-, or at least local-network-, readable
directory in every user's home directory, grant l access to the top level
of their home directory, move .k5login to the readable directory, and
symlink it. So far as I know, every site that uses AFS with Kerberos has
had to deal with this; Stanford has been doing this for all users for over
a decade. The l ACLs on the top level of the home directory are rather
unfortunate, but the other ways to work around this are much more complex.
We do this too.

Any distributed file system has the same problem, if files in the home
directory need to be accessed during login. NFSv4 may have to address the
same problems.

--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Russ Allbery
2006-01-04 19:55:26 UTC
Post by Douglas E. Engert
The sshd could accept a forwarded ticket for the sole purpose of using
it to get an AFS token so the sshd could access the .k5login file before
the krb5_kuserok was called (There might be some other dot files that
could also be accessed early.) Getting this ticket early does not
change the security model, as the checking of the .k5login is to allow
access to the local machine, not the AFS file system. The forwarded
ticket and token could be discarded if the krb5_kuserok fails.
The client is, understandably, not going to forward the ticket until after
the authentication step is complete, so what this basically means is
authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.

And this doesn't help with the PAM situation, where you don't get an AFS
token until after pam_setcred is called, which is after pam_authenticate,
and some programs only call pam_authenticate and never call the other PAM
functions. This is probably wrong of them, but still, it shouldn't
introduce a security hole.

I suppose you could fall back on the standard PAM cheat of doing
everything in pam_authenticate and making everything else a no-op, but
that too breaks in other situations where people call pam_authenticate in
a different context than pam_setcred (OpenSSH is again at fault).

I don't see a good solution to this, unfortunately. I wish that AFS
supported the directory lookup semantics supported in Unix with execute
but no read, but I can see why that would be rather hard to do.
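(On a local filesystem the equivalent is the chmod 711 home directory:
search permission without read, so a file with a known name such as
.k5login can still be opened even though the directory can't be listed:

chmod 711 ~              # others may traverse, but not list, the directory
chmod 644 ~/.k5login     # the file itself still has to be readable

AFS's l right is the closest thing, but it also lets you list the
directory, which is why having to grant it is unfortunate.)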
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Altman
2006-01-04 20:02:20 UTC
Post by Russ Allbery
The client is, understandably, not going to forward the ticket until after
the authentication step is complete, so what this basically means is
authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.
Processing of the .k5login file is not an authentication operation,
it is an authorization operation. Therefore, it is perfectly reasonable
for the client to mutually authenticate with a server, forward a ticket
and then have access rejected due to an authorization failure.

Jeffrey Altman
Russ Allbery
2006-01-04 21:36:03 UTC
Post by Jeffrey Altman
Processing of the .k5login file is not an authentication operation, it
is an authorization operation. Therefore, it is perfectly reasonable
for the client to mutually authenticate with a server, forward a ticket
and then have access rejected due to an authorization failure.
Hm, yes, that's a good point.

Okay, I withdraw my objection about how this works with OpenSSH
forwarding; my only concern is for how to do the right thing in PAM
modules then.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-04 22:55:04 UTC
On Wednesday, January 04, 2006 03:02:20 PM -0500 Jeffrey Altman
Post by Jeffrey Altman
Post by Russ Allbery
The client is, understandably, not going to forward the ticket until
after the authentication step is complete, so what this basically means
is authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.
Processing of the .k5login file is not an authentication operation,
it is an authorization operation.
Conceptually, yes.
In the PAM world, authorization checks such as this are done as part of the
"authenticate" operation, not the "account management" operation.

For cases where authentication is not done using PAM, such as sshd using
gssapi user auth, the application is responsible for performing whatever
authorization checks are required. In ssh, this is done as part of the
user authentication operation.

-- Jeff
Juha Jäykkä
2006-01-05 13:26:23 UTC
Post by Jeffrey Hutzelman
Conceptually, yes.
In the PAM world, authorization checks such as this are done as part of
the "authenticate" operation, not the "account management" operation.
Seems to me, then, that PAM is lacking proper handling of user
authorization. It may not be much different from handling authorization
and authentication together, but looks like having different hooks for
these different things might be a good idea. Go whine to PAM people? =)

As for the various other things discussed in this thread: the first
solution I came up with was to add the sshd host to PTS and give rl to
this principal, but sshd *leaks* this token to the user. Is this actually
a PAG problem?

I put the symlinks in place and things are fine, so thanks for the help!

[Russ: the earlier problem on debian-devel was indeed related to the
aes keys, so thanks for that, too.]

-Juha

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Russ Allbery
2006-01-09 23:25:05 UTC
Post by Juha Jäykkä
Seems to me, then, that PAM is lacking proper handling of user
authorization. It may not be much different from handling authorization
and authentication together, but looks like having different hooks for
these different things might be a good idea. Go whine to PAM people? =)
PAM is lacking a lot of things, the worst of which being a clear and
complete specification that everyone follows. The problem is not so much
knowing what it needs as figuring out how to get there from here.
Unfortunately, everyone has implemented PAM, everyone has hacked around
each other's bugs, and there are dozens of different ways of doing PAM
that all sort of work and that are all used in practice.

I'm not sure how you'd practically manage to fix the problem without
introducing a new protocol and a new API that's clearer and more strictly
enforced up-front and making everyone migrate, something that would most
likely take a decade.

I'm not particularly optimistic about making significant changes to PAM at
this point. For the time being, we probably have to live largely with
what we've got.
Post by Juha Jäykkä
As for the various other things discussed in this thread: the first
solution I came up with was to add the sshd host to PTS and give rl to
this principal, but sshd *leaks* this token to the user. Is this actually
a PAG problem?
sshd won't leak this token to the user if your PAM setup is appropriate.
You have to make sure that the user is put into their own PAG as part of
the session initialization process, even if they don't get a token.
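In PAM terms, that means an AFS-aware entry in the session stack,
something along these lines (the module name is the one shipped in
Debian's libpam-openafs-session; other packages provide equivalents under
other names, and it has to actually be reached, so don't mark the modules
above it sufficient):

session optional pam_krb5.so external forwardable use_shmem debug
session optional pam_openafs_session.so
session required pam_unix.so

The point is that the PAG gets created unconditionally, even when no
token can be obtained.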
Post by Juha Jäykkä
[Russ: the earlier problem on debian-devel was indeed related to the
aes keys, so thanks for that, too.]
Ah! Thank you for saying! I never would have guessed that, and now I'll
know for the future.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Juha Jäykkä
2006-01-11 07:17:28 UTC
Post by Russ Allbery
sshd won't leak this token to the user if your PAM setup is appropriate.
You have to make sure that the user is put into their own PAG as part of
the session initialization process, even if they don't get a token.
I would have thought pam_krb5.so [1] does this by itself, but apparently I
am mistaken (again). While it would be relatively easy to write a small
pam module to handle the creation of a suitable PAG, I must wonder whether
one exists already? Anything that depends on aklog from openafs-krb5 will
not do since it just segfaults (probably the AES keys again, but I did not
test this point).

By the way, is Heimdal's kinit/afslog at fault here for not creating the
proper PAG? It's very convenient to have kinit do all the tricks, but if
it does them wrong...
Post by Russ Allbery
Ah! Thank you for saying! I never would have guessed that, and now
I'll know for the future.
You're welcome.

Cheers,
Juha

[1] The version from :pserver:***@rhlinux.redhat.com:/usr/local/CVS -
it looks like it's the old RedHat pam_krb5.so merged with the sf.net
version, and still under active development unlike any other pam_krb5.so I
can find.

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Russ Allbery
2006-01-11 08:45:38 UTC
Post by Juha Jäykkä
I would have thought pam_krb5.so [1] does this by itself, but apparently
I am mistaken (again).
It's only a PAM module for Kerberos. It doesn't know anything about AFS.
Post by Juha Jäykkä
While it would be relatively easy to write a small pam module to handle
the creation of a suitable PAG, I must wonder whether one exists
already?
libpam-openafs-session in Debian. There are others floating around as
well.
Post by Juha Jäykkä
Anything that depends on aklog from openafs-krb5 will not do since it
just segfaults (probably the AES keys again, but I did not test this
point).
Ah. Well, either you're going to have to create a DES key for AFS or
you're going to have to run the kaserver and use Kerberos v4 for AFS. AFS
doesn't do AES, at all. If you do have a DES key for AFS, I don't see why
that aklog wouldn't work, but it's also fairly old. Soon we'll have the
OpenAFS aklog packaged for Debian.
Post by Juha Jäykkä
By the way, is Heimdal's kinit/afslog at fault here for not creating the
proper PAG?
Generally a process has to put itself in a PAG. There's an ugly hack for
putting your parent process in a PAG (and for right now
libpam-openafs-session even relies on it), but it's not the default. You
don't really want to do that without being in control of it; otherwise,
running kinit would, for instance, sever your PAG from the PAG of any
background processes spawned in the same shell. That's not what people
normally expect to have happen.
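The difference is easiest to see from a shell (both commands ship with
OpenAFS; the principal is just a placeholder):

~> pagsh                 # a new shell that has put *itself* into a fresh PAG
$ kinit foo && aklog     # these credentials stay confined to that PAG
$ exit
~> aklog -setpag         # the hack: aklog puts its *parent* shell into a new PAG

The second form is what a process has to resort to when it can't make the
setpag call itself.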
Post by Juha Jäykkä
- it looks like it's the old RedHat pam_krb5.so merged with the sf.net
version, and still under active development unlike any other pam_krb5.so I
can find.
The Red Hat Kerberos PAM module scares me. The PAM module in Debian is
under active development with a different upstream and handles some things
better (and will handle quite a few more things better when I find time to
get the next version uploaded).
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Russ Allbery
2006-01-11 19:17:59 UTC
I disagree.
Russ Allbery
2006-01-11 22:33:37 UTC
Post by Juha Jäykkä
Ah, okay, I didn't realize that.
It's the best working solution I have been able to come up with. Its
being monolithic makes it non-ideal, but it seems to work fine. It even
parses krb5.conf's [appdefaults] pam = { ... } and is easy to
configure. It even allows me to set non-default renew_timeouts and
such. And it handles ssh/gssapi just fine. (Provided the symlink hassle
in /afs/.../home/...)
Yeah, this is part of what scares me about it, since it builds its own
krb5.conf parser using lex and yacc. Hopefully the new Kerberos v5
profile library API that's supposed to be coming in the next major release
will obviate the need for doing anything this horrible.
Post by Juha Jäykkä
I was curious and installed openafs-krb5 on one machine, ran aklog in
gdb and did a stack trace after the segfault. It dies in
krb5_get_host_realm() in libkrb5.so.3. It turns out krb5_get_host_realm()
cannot handle an *indented* comment within [domain_realm]! That is,
[domain_realm]
    # foo
    .tfy.utu.fi = TFY.UTU.FI
causes a SIGSEGV, while
[domain_realm]
# foo
    .tfy.utu.fi = TFY.UTU.FI
does not.
This was fixed in the MIT Kerberos packages in Debian in version 1.3.6-4:

* Allow whitespace before comments in krb5.conf. Thanks, Jeremie
Koenig. (Closes: #314609)

but as I recall, you're using stable, which missed this fix by two Debian
packages.

It's MIT Kerberos RT #1988 and is one of 14 patches that are in the
current Debian packages and have been submitted upstream but which I don't
believe have been committed to the krb5 source tree yet. :/
Post by Juha Jäykkä
I'll go back to checking the openafs-krb5 stuff now since aklog now
works. I would also appreciate any help on making aklog compile against
Heimdal, but it seems like a bigger thing - there are so many things to
tackle.
You probably don't really need to do this, as Heimdal comes with an afslog
that should work fine -- although, I don't know if it supports the -setpag
flag to set a PAG for the parent process. Unfortunately, doing PAM
properly requires either that or linking with the AFS libraries.

Linking with the AFS libraries will be easier in the 1.4.1 release since
there will then be a shared library that contains only the lsetpag()
function, at which point my intention is to significantly overhaul the way
that PAGs and aklog are handled in Debian.
Post by Juha Jäykkä
You've been extremely helpful already. Thank you. It is not very common
to find people as helpful as you.
I just wish I was better at understanding or guessing at what issues
you're running into. :/ For whatever reason, I've guessed wrong rather
more frequently than I usually do. But I'm learning a lot in the process!
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Douglas E. Engert
2006-01-11 22:46:09 UTC
Post by Russ Allbery
You probably don't really need to do this, as Heimdal comes with an afslog
that should work fine -- although, I don't know if it supports the -setpag
flag to set a PAG for the parent process. Unfortunately, doing PAM
properly requires either that or linking with the AFS libraries.
It's actually a matter of knowing what syscall AFS uses to set the PAG,
and what to do if the syscall fails. You don't really have to link in
any AFS library. The pam_afs2 does this, and uses no AFS or Kerberos libs.

It would be nice if AFS provided a header file or picked up the pam_afs2.
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Russ Allbery
2006-01-11 22:50:51 UTC
Post by Douglas E. Engert
It's actually a matter of knowing what syscall AFS uses to set the PAG,
and what to do if the syscall fails. You don't really have to link in
any AFS library. The pam_afs2 does this, and uses no AFS or Kerberos libs.
It would be nice if AFS provided a header file or picked up the pam_afs2.
Yeah, you can do it that way too right now, but I don't want to assume
that we're always going to use a system call or that it's always going to
be as simple as calling syscall. I can imagine possible future changes
(particularly with Linux, where adding syscalls is a pain in the ass)
where this might not necessarily be the case.

I think a simple shared library is a better long-term solution for
providing the interface than just a header file, since it can then cope
with such changes and should have an exceedingly stable API.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-11 23:06:05 UTC
On Wednesday, January 11, 2006 02:33:37 PM -0800 Russ Allbery
Post by Russ Allbery
You probably don't really need to do this, as Heimdal comes with an afslog
that should work fine -- although, I don't know if it supports the -setpag
flag to set a PAG for the parent process. Unfortunately, doing PAM
properly requires either that or linking with the AFS libraries.
No it doesn't. All it needs is a relatively simple library which provides
a way to check for the presence of AFS and to make AFS system calls,
particularly setpag and pioctl.

Conveniently, heimdal comes with such a library; it's called libkafs and
includes functions like k_hasafs(), k_pioctl(), k_unlog(), and k_setpag().
IIRC, Derrick used to distribute a standalone version of this library for
people not using Heimdal, but it's probably pretty stale by now.
Post by Russ Allbery
Linking with the AFS libraries will be easier in the 1.4.1 release since
there will then be a shared library that contains only the lsetpag()
function, at which point my intention is to significantly overhaul the way
that PAGs and aklog are handled in Debian.
Ugh. lsetpag() is not really intended to be a public interface. The
public interface provided by AFS is called setpag().

Also, I'd suggest that instead of a shared library containing only lsetpag,
it might be better to provide a library containing the functions I named
above, with the same API that the KTH folks have been distributing for
years.

-- Jeff
Russ Allbery
2006-01-11 23:15:56 UTC
Post by Jeffrey Hutzelman
No it doesn't. All it needs is a relatively simple library which
provides a way to check for the presence of AFS and to make AFS system
calls, particularly setpag and pioctl.
Conveniently, heimdal comes with such a library; it's called libkafs and
includes functions like k_hasafs(), k_pioctl(), k_unlog(), and
k_setpag(). IIRC, Derrick used to distribute a standalone version of
this library for people not using Heimdal, but it's probably pretty
stale by now.
Certainly such a library would be useful. If someone wants to implement
it without all of the dependencies that Heimdal libkafs has, I'd be all in
favor of it. Heimdal's libkafs has significant dependencies and isn't
really suitable for use outside of a system built with Heimdal in general.
Post by Jeffrey Hutzelman
Ugh. lsetpag() is not really intended to be a public interface. The
public interface provided by AFS is called setpag().
Providing setpag in a shared library that's actually maintainable is,
right now, unfeasible in OpenAFS without a great deal of work. If you
don't agree, please do say so, but I looked at it somewhat seriously and
it was way more work than I thought was likely to happen. It requires,
for instance, pulling in all of Rx.

Doing that work requires caring way more about the NFS translator than I
do, I'm afraid.
Post by Jeffrey Hutzelman
Also, I'd suggest that instead of a shared library containing only
lsetpag, it might be better to provide a library containing the
functions I named above, with the same API that the KTH folks have been
distributing for years.
I think it would be better, yes, but it's a lot more work if you want to
really match APIs, since that requires providing krb_afslog and
krb5_afslog.

Basically, we'd be talking about taking most of aklog and putting it in a
library. Which isn't at all a bad idea, but it's a larger project, and
I'd really like to see a solution for this in the shorter term. Long
term, absolutely, I think that's a fine idea, and if we can keep it
API-compatible with Heimdal, that would be awesome.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-11 23:42:41 UTC
On Wednesday, January 11, 2006 03:15:56 PM -0800 Russ Allbery
Post by Russ Allbery
Post by Jeffrey Hutzelman
No it doesn't. All it needs is a relatively simple library which
provides a way to check for the presence of AFS and to make AFS system
calls, particularly setpag and pioctl.
Conveniently, heimdal comes with such a library; it's called libkafs and
includes functions like k_hasafs(), k_pioctl(), k_unlog(), and
k_setpag(). IIRC, Derrick used to distribute a standalone version of
this library for people not using Heimdal, but it's probably pretty
stale by now.
Certainly such a library would be useful. If someone wants to implement
it without all of the dependencies that Heimdal libkafs has, I'd be all in
favor of it. Heimdal's libkafs has significant dependencies and isn't
really suitable for use outside of a system built with Heimdal in general.
That's true for the full libkafs, but not for the functions I mentioned.
Post by Russ Allbery
Post by Jeffrey Hutzelman
Ugh. lsetpag() is not really intended to be a public interface. The
public interface provided by AFS is called setpag().
Providing setpag in a shared library that's actually maintainable is,
right now, unfeasible in OpenAFS without a great deal of work. If you
don't agree, please do say so, but I looked at it somewhat seriously and
it was way more work than I thought was likely to happen. It requires,
for instance, pulling in all of Rx.
I was talking about interfaces, not implementations. I don't have any
problem with a library which implements the setpag() API without rmtsys
support. But a library which exports only lsetpag() is not all that useful.
Post by Russ Allbery
Post by Jeffrey Hutzelman
Also, I'd suggest that instead of a shared library containing only
lsetpag, it might be better to provide a library containing the
functions I named above, with the same API that the KTH folks have been
distributing for years.
I think it would be better, yes, but it's a lot more work if you want to
really match APIs, since that requires providing krb_afslog and
krb5_afslog.
The functions I named don't require providing krb_afslog or krb5_afslog.
They are just an API for making AFS syscalls.

I'm not sure if this is the right approach, but we could even provide a
library named libkafs.so, containing only the k_* functions I mentioned,
with the idea being that we'd eventually provide the rest (hopefully
eliminating the need for Kerberos implementations to do so), and in the
meantime, people who want the full interface can replace our library with
Heimdal's.

-- Jeff
Russ Allbery
2006-01-12 00:00:47 UTC
Post by Jeffrey Hutzelman
Post by Russ Allbery
Certainly such a library would be useful. If someone wants to
implement it without all of the dependencies that Heimdal libkafs has,
I'd be all in favor of it. Heimdal's libkafs has significant
dependencies and isn't really suitable for use outside of a system
built with Heimdal in general.
That's true for the full libkafs, but not for the functions I mentioned.
Ah, hm, I see what you're saying. So basically, the API provided would be
only the k_* functions, namely:

k_hasafs
k_pioctl
k_unlog
k_setpag
k_afs_cell_of_file

the two AFSCALL defines, the VIOC* defines, and struct ViceIoctl?
Post by Jeffrey Hutzelman
I was talking about interfaces, not implementations. I don't have any
problem with a library which implements the setpag() API without rmtsys
support. But a library which exports only lsetpag() is not all that useful.
I was trying for something minimally intrusive, which was ruling out
changing the name of the function (since otherwise I couldn't just reuse
the same code). If we do something like the above, I wonder if it
wouldn't possibly be better to copy the required code into a new source
directory. I'm not sure how best to do this cleanly.
Post by Jeffrey Hutzelman
I'm not sure if this is the right approach, but we could even provide a
library named libkafs.so, containing only the k_* functions I mentioned,
with the idea being that we'd eventually provide the rest (hopefully
eliminating the need for Kerberos implementations to do so), and in the
meantime, people who want the full interface can replace our library
with Heimdal's.
Hm, that's a thought. I'm not sure if that's a good idea either, but it
has some definite advantages.

On the other hand, it's rather a pain to deal with from a distribution
perspective, since you run into conflicts between the two libraries and
have to be careful about SONAMEs, library versioning, etc. This can be
dealt with (both Heimdal and MIT provide a libkrb5, for instance), but
it's ugly.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-12 02:45:56 UTC
On Wednesday, January 11, 2006 04:00:47 PM -0800 Russ Allbery
Post by Russ Allbery
Post by Jeffrey Hutzelman
Post by Russ Allbery
Certainly such a library would be useful. If someone wants to
implement it without all of the dependencies that Heimdal libkafs has,
I'd be all in favor of it. Heimdal's libkafs has significant
dependencies and isn't really suitable for use outside of a system
built with Heimdal in general.
That's true for the full libkafs, but not for the functions I mentioned.
Ah, hm, I see what you're saying. So basically, the API provided would be
k_hasafs
k_pioctl
k_unlog
k_setpag
k_afs_cell_of_file
the two AFSCALL defines, the VIOC* defines, and struct ViceIoctl?
Actually, I'm not sure why k_afs_cell_of_file is interesting; I thought
that was just a pioctl. But it could be thrown in for good measure.

The AFSCALL defines are actually completely uninteresting to users of that
API, since it doesn't provide any way to call arbitrary AFS syscalls. As
for the VIOC defines, we should do something, but I'm not too thrilled with
the idea of OpenAFS containing two independently-maintained lists of those.

I suppose I don't get to complain about that until I start distributing an
authoritative header to go with the tables at grand.central.org/numbers.
Post by Russ Allbery
Post by Jeffrey Hutzelman
I was talking about interfaces, not implementations. I don't have any
problem with a library which implements the setpag() API without rmtsys
support. But a library which exports only lsetpag() is not all that useful.
I was trying for something minimally intrusive, which was ruling out
changing the name of the function (since otherwise I couldn't just reuse
the same code).
echo 'int setpag(void) { return lsetpag(); }' | gcc -x c -c - -o setpag.o

:-)

(what's sad is that on my amd64_linux26 machine, that produces a 1344-byte
file, of which 16 bytes are actually code. Talk about bloat!)
Post by Russ Allbery
If we do something like the above, I wonder if it
wouldn't possibly be better to copy the required code into a new source
directory. I'm not sure how best to do this cleanly.
Well, we more or less need a separate directory for each library we build;
that's just how the build system is. So essentially what you need is to
build src/sys/afssyscalls.c in more than one place. That's not too hard;
we do it in several places already. It would presumably need some bashing
to be able to provide both sets of interfaces. I'm wary of implementing
lpioctl in terms of k_pioctl or vice versa, because that sounds like it's
asking for symbol conflicts if someone uses libsys and libkafs (ours or the
real one) in the same binary.
Post by Russ Allbery
On the other hand, it's rather a pain to deal with from a distribution
perspective, since you run into conflicts between the two libraries and
have to be careful about SONAMEs, library versioning, etc. This can be
dealt with (both Heimdal and MIT provide a libkrb5, for instance), but
it's ugly.
True. Like I said, I'm not sure it's the right approach, but it might be.
Some distribution pain might be worth people being able to build AFS-aware
apps that can be built against either heimdal or openafs without requiring
the other to be present...

-- Jeff
Douglas E. Engert
2006-01-12 15:10:11 UTC
Post by Russ Allbery
Post by Jeffrey Hutzelman
No it doesn't. All it needs is a relatively simple library which
provides a way to check for the presence of AFS and to make AFS system
calls, particularly setpag and pioctl.
Conveniently, heimdal comes with such a library; it's called libkafs and
includes functions like k_hasafs(), k_pioctl(), k_unlog(), and
k_setpag(). IIRC, Derrick used to distribute a standalone version of
this library for people not using Heimdal, but it's probably pretty
stale by now.
Certainly such a library would be useful. If someone wants to implement
it without all of the dependencies that Heimdal libkafs has, I'd be all in
favor of it. Heimdal's libkafs has significant dependencies and isn't
really suitable for use outside of a system built with Heimdal in general.
Post by Jeffrey Hutzelman
Ugh. lsetpag() is not really intended to be a public interface. The
public interface provided by AFS is called setpag().
Providing setpag in a shared library that's actually maintainable is,
right now, unfeasible in OpenAFS without a great deal of work. If you
don't agree, please do say so, but I looked at it somewhat seriously and
it was way more work than I thought was likely to happen. It requires,
for instance, pulling in all of Rx.
I agree; setpag has too much baggage added for the translator, I believe.
lsetpag is basically a syscall.
Post by Russ Allbery
Doing that work requires caring way more about the NFS translator than I
do, I'm afraid.
Post by Jeffrey Hutzelman
Also, I'd suggest that instead of a shared library containing only
lsetpag, it might be better to provide a library containing the
functions I named above, with the same API that the KTH folks have been
distributing for years.
I think it would be better, yes, but it's a lot more work if you want to
really match APIs, since that requires providing krb_afslog and
krb5_afslog.
Setting of the PAG has little to do with Kerberos, and no Kerberos
code is needed to set the PAG. The idea is to get the PAG in the daemon
process, then later on, in other processes or even in the same process,
add tokens under the PAG.
Post by Russ Allbery
Basically, we'd be talking about taking most of aklog and putting it in a
library. Which isn't at all a bad idea, but it's a larger project, and
I'd really like to see a solution for this in the shorter term. Long
term, absolutely, I think that's a fine idea, and if we can keep it
API-compatible with Heimdal, that would be awesome.
Yes, aklog in a library, fine, but keep the setting of the PAG separate.
Ideally, you would like the setting of the PAG to be so trivial that
any vendor would always add it to all of their daemons, even if
AFS was not present.

If you want something short term, see the gafstoken library which is used
by pam_afs2. It traps SIGSYS and SIGSEGV, then makes the syscall;
on Linux, it tries open(PROC_SYSCALL_FNAME, O_RDWR) and
open(PROC_SYSCALL_ARLA_FNAME, O_RDWR) first, then falls back to a syscall.
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Juha Jäykkä
2006-01-11 22:02:14 UTC
Ah, okay, I didn't realize that.
It's the best working solution I have been able to come up with. Its being
monolithic makes it non-ideal, but it seems to work fine. It even parses
krb5.conf's [appdefaults] pam = { ... } and is easy to configure. It even
allows me to set non-default renew_timeouts and such. And it handles
ssh/gssapi just fine. (Provided the symlink hassle in /afs/.../home/...)
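For reference, the block it reads looks something like this (these are
just the options I happen to set; the module's own documentation lists
the rest):

[appdefaults]
    pam = {
        forwardable = true
        ticket_lifetime = 36000
        renew_lifetime = 36000
    }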
Post by Russ Allbery
I don't see why that aklog wouldn't work, but it's also fairly old.
It really shouldn't care, but you're running into such bizarre problems
at this point I can't even speculate as to what might be going on.
I was curious and installed openafs-krb5 on one machine, ran aklog in gdb
and did a stack trace after the segfault. It dies in krb5_get_host_realm()
in libkrb5.so.3. It turns out krb5_get_host_realm() cannot handle an
*indented* comment within [domain_realm]! That is,

[domain_realm]
    # foo
    .tfy.utu.fi = TFY.UTU.FI

causes a SIGSEGV, while

[domain_realm]
# foo
    .tfy.utu.fi = TFY.UTU.FI

does not. The funny thing is, Heimdal's verify_krb5.conf never complains
(about that!). Who's at fault now, Heimdal's verification engine (which
uses Heimdal's libkrb5.so.17, not the above libkrb5.so.3) or libkrb5.so.3?
In either case, someone will get a bug report tomorrow, I just wish I knew
whom to send it to. The easiest thing would be "reportbug libkrb53". =)
Actually, I was not able to (quickly) find any information on whether
comments in krb5.conf are supported at all! I suppose they are since
Debian's default krb5.conf ships with them. (Heimdal version, once again.)

I'll go back to checking the openafs-krb5 stuff now since aklog now works.
I would also appreciate any help on making aklog compile against
but it seems like a bigger thing - there are so many things to tackle.
Post by Russ Allbery
I think I'll bow out; you're trying to do things with Heimdal that I've
You've been extremely helpful already. Thank you. It is not very common to
find people as helpful as you.
--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
zeroguy
2006-01-11 22:26:43 UTC
On Wed, 11 Jan 2006 00:45:38 -0800, Russ Allbery wrote:
Soon we'll have the OpenAFS aklog packaged for Debian.
Russ, just wondering, is there any specific time or a specific release
you're planning on packaging it (like, maybe 1.4.1?) Will it remain a
separate package?

-zeroguy
Russ Allbery
2006-01-11 23:00:28 UTC
Post by zeroguy
Soon we'll have the OpenAFS aklog packaged for Debian.
Russ, just wondering, is there any specific time or a specific release
you're planning on packaging it (like, maybe 1.4.1?)
1.4.1, yes.
Post by zeroguy
Will it remain a separate package?
I'm not sure. My plan was to keep it a separate package initially,
although it may make sense down the road to just incorporate the binaries
from openafs-krb5 into openafs-client and have openafs-krb5 be a
transitional package for a release.

With the 1.4.1 release, my plan is to:

* Eliminate openafs-krb5 as a separate source package in favor of just
building a binary package with the same contents from the openafs
source.

* Build a libafssetpag shared library from the openafs source.

* Redo libpam-openafs-session to call lsetpag from libafssetpag to create
a PAG and then run aklog without the -setpag flag. That should let
people use it with afslog or the like if they want.
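From a shell, the sequence the reworked session module goes through is
roughly the equivalent of (assuming the Kerberos ticket is already there
from pam_krb5):

~> pagsh      # new PAG, the job lsetpag() from libafssetpag does in the module
$ aklog       # token obtained inside that PAG, no -setpag needed
$ tokens      # confirms the token is attached to this PAG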
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Juha Jäykkä
2006-01-11 11:19:57 UTC
Post by Russ Allbery
Post by Juha Jäykkä
I would have thought pam_krb5.so [1] does this by itself, but
It's only a PAM module for Kerberos. It doesn't know anything about AFS.
I disagree. From its README:

o tokens
Create a new AFS PAG and obtain AFS tokens during the authentication
phase. By default, tokens are obtained for the local cell (and the cell
which contains the user's home directory, if they're not the same).

Except that, like I said earlier, it seems instead to inherit the PAG from
its parent. Perhaps I need to investigate it further. Or perhaps it fails
to create the PAG when doing ssh/GSSAPI.

Anyway, apart from pam_unix.so, my PAM config has nothing else, but I still get
my AFS tokens at login, no matter how I authenticate (GSSAPI or Heimdal
passwords - using ssh/pubkey of course does not give me the tokens).
Post by Russ Allbery
libpam-openafs-session in Debian. There are others floating around as
well.
Which depends on aklog, which segfaults.
Post by Russ Allbery
AFS doesn't do AES, at all. If you do have a DES key for AFS, I don't
see why that aklog wouldn't work, but it's also fairly old. Soon we'll
Well, I do and it does not. I suppose it does not like even the user
having an AES key.
Post by Russ Allbery
for putting your parent process in a PAG (and for right now
libpam-openafs-session even relies on it), but it's not the default.
Is this again due to the differences between MIT and Heimdal that we need
to use an additional AFS module besides plain Kerberos? Heimdal kinit does
everything I need, except the PAG. Or does it do the PAG too well?

These happen in an xterm:

~> kinit foo
***@REALM's Password:
~> touch /afs/something/you/dont/have/write/permission/to
touch: cannot touch `<the path above>': Permission denied
~> xterm -e 'kinit foo/admin'
[type the password in the other xterm]
~> touch /afs/something/you/dont/have/write/permission/to
~> ls /afs/something/you/dont/have/write/permission/to
/afs/something/you/dont/have/write/permission/to
~>

I.e. kinit replaced the token of the parent xterm. Actually, it replaces
the tokens of all the processes in the same X session. If I have an ssh
session to localhost, its tokens remain unaltered, but all processes
running under the same X session get their tokens replaced. [I am a little
imprecise here: I only tested processes running as children of the same
window manager, but I suppose it makes no difference if I used xsm to open
a new xterm instead of the wm.]
Post by Russ Allbery
Post by Juha Jäykkä
it's the old RedHat pam_krb5.so emerged with the sf.net version and
with still active development unlike any other pam_krb5.so I can find.
The Red Hat Kerberos PAM module scares me. The PAM module in Debian is
under active development with a different upstream and handles some
things better (and will handle quite a few more things better when I
find time to get the next version uploaded).
Except that it does not work. =) Well, it works, but it really is just
kerberos, no AFS. It needs libpam-openafs-session to go with it and THAT
does not work. I'd go for Debian's pam_krb5 and pam_openafs_session any
time, if they worked.

I am all for the "one program does one thing" ideology (i.e. pam_krb5.so
does Kerberos and pam_openafs_session does AFS, instead of a monolithic
pam_krb5.so doing both), except that there seems to be no working
combination of these two for Heimdal 0.7.1 and user AES keys.

Suggestions?

I have one: there is such a thing as pam_afs2.so, which I found somewhere,
which can run arbitrary programs as part of the PAM login process (at the
auth stage, if I recall). It can run afslog (and it even comes with its own
afs5log, of which I know nothing) instead of aklog if I wish, but I don't
know whether it does PAGs at all.

Cheers,
Juha

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Douglas E. Engert
2006-01-12 20:05:08 UTC
Post by Juha Jäykkä
Suggestions?
I have one: there is such a thing as pam_afs2.so, which I found somewhere,
which can run arbitrary programs as part of PAM login process (at auth
stage, if I recall). It can do afslog (and it even comes with its own
afs5log of which I know nothing) instead of aklog if I wish, but I don't
know if it does PAG at all.
The pam_afs2 is mine. It can get a PAG from any of the pam_sm_* entry points.
It does not have its own afs5log.

It and its friends can be found at ftp://achilles.ctd.anl.gov/pub/DEE

pam_afs2-0.1.tar
The pam module that gets a PAG using a syscall, then fork/execs
some program to get a token. It passes the pam_env to the program and
runs it as the user. The exec'ed program could be the OpenAFS aklog
or the Heimdal afslog, for example. (We have something local called ak5log,
which has been around since DCE days and uses K5 protocols as much as possible.)
We also have the gssklog, see below.

gafstoken-0.3.tar
The shared lib called by pam_afs2 that has the syscall to get the PAG,
and the code to do the fork/exec. (It compiles and links without any AFS
or Kerberos headers or libs. It does have some knowledge of what syscall
to use on what system.) On machines with the MIT daemons like ftp, klogin,
and kshd, a local mod uses this as well.

gssklog-0.11.tar
This is an alternative to aklog that uses gssapi to authenticate to
one of the gssklogd daemons running on the AFS database servers. It then
returns a token protected by gss_wrap. It uses the same set of parameters
as aklog, so it can be fork/exec'ed by the gafstoken called from pam_afs2.
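Wired into a PAM configuration it ends up looking something like this
(the placement is up to you, since it implements all of the pam_sm_*
entry points; which token program it fork/execs - aklog, afslog, or
gssklog - is part of the module's own configuration, not of this file):

auth    sufficient  pam_krb5.so
auth    required    pam_unix.so try_first_pass
session optional    pam_afs2.so
session required    pam_unix.so

There is also a nopag option for callers like xlock or xscreensaver that
should reuse the current PAG rather than get a new one.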


The design goal of all of this was to keep AFS as far away from Kerberos
as possible, and never to have to rely on a vendor's daemon linking
(even dynamically via pam) with either, and especially with both.

The gssapi used in gssklog does not even have to be Kerberos! It was originally
designed for use with the Globus GSI gssapi. (But that is another story.)

For example, on Solaris 10 we are using the Solaris sshd, Solaris Kerberos,
and Solaris pam_krb5. The pam_afs2 gets called with KRB5CCNAME set,
and this gets passed during the fork/exec of the gssklog, which uses the
Solaris gssapi. I even got the OpenAFS aklog to link and run with the Solaris
Kerberos, and can use that instead of the gssklog. (There is no MIT or Heimdal
Kerberos on these machines, other than what the AFS kernel module has built in.)
Post by Juha Jäykkä
Cheers,
Juha
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Douglas E. Engert
2006-01-13 17:07:51 UTC
Post by Douglas E. Engert
It and its friends can be found at ftp://achilles.ctd.anl.gov/pub/DEE
pam_afs2-0.1.tar
gafstoken-0.3.tar
gssklog-0.11.tar
Post by Juha Jäykkä
This is the beast I was referring to. I'm sorry I was too lazy to check
who created it and properly credit you.
Post by Douglas E. Engert
The design goals of all of this was to keep AFS as far away from
Kerberos as possible, and never have to rely on a vendor's daemon to
have to link (even dynamically via pam) with either and especially with
both.
Post by Juha Jäykkä
I agree that this is the cleanest way to go. And it is in good old Unix
philosophy that one program does one thing - "ls" does not remove files and
"rm" does not list them. =)
I never tried it, though, except for debugging other pam modules (I used
it to call a little shell script). The reason was that it did not appear to
have shown any signs of activity for a year, and I shun inactive upstreams.
Obviously, I was wrong. Perhaps you just made such a damn good module that
it needs no further development? =) Testing pam_afs2 and friends is now
back on my todo list...
Perhaps someone would like to start maintaining a Debian package of these
three? I'd do that myself but I lack the status of a DD (that could be
corrected, though).
I would like to see the OpenAFS people pick this up and distribute the pam_afs2
or its equivalent with OpenAFS, as it is only used by AFS. The discussions
on the list lately are headed this way.
Post by Douglas E. Engert
The gssapi used in gssklog does not even have to be Kerberos! It was
originally designed for use with the Globus GSI gssapi. (But that is
Post by Juha Jäykkä
It might be a very important story for us, since we participate in two
grid projects (NorduGrid: http://www.nordugrid.dk and M-grid:
http://www.csc.fi/proj/mgrid/) which both use Globus.
I used to be on the Globus project, but not any more. The gatekeeper
was set up to be able to fork/exec the gssklog. There is a gatekeeper
patch in with it too. You could run the gssklog for the Globus users
while still using Kerberos for your normal users.
Post by Douglas E. Engert
gssklog. (There is no MIT or Heimdal Kerberos on these machines, other
than what the AFS kernel module has built in.)
Post by Juha Jäykkä
This sounds like the way to go: separate OpenAFS and Kerberos
*implementations* as well as possible. The fact that AFS uses Kerberos
internally means this is a little more complicated an issue than just not
linking with kerberos, but still you're obviously going the right way.
Cheers,
Juha
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Juha Jäykkä
2006-01-13 19:00:06 UTC
Post by Douglas E. Engert
I would like to see the OpenAFS people pick this up and distribute the
pam_afs2 or its equivalent with OpenAFS, as it is only used by AFS. The
discussions on the list lately are headed this way.
I support that idea. It is the only pam module which does things the Right
Way(tm). I did some testing with OpenSSH 4.2, PAM and OpenAFS today (the
whole day, actually) and here is what I found out:

RedHat's pam_krb5.so

Will leak tokens (not create a PAG) when authenticating with pubkey
Gets tokens when given kerberos password
Does not get tokens when given the password pam_unix.so uses
Gets tokens when authenticating with gssapi
All this works no matter how sshd is configured


Debian's pam_krb5.so (where does this originate from?)

Will leak tokens (not create a PAG) when authenticating with pubkey
Does not get tokens when given the password pam_unix.so uses
Gets tokens when authenticating with gssapi
All this works no matter how sshd is configured

Debian's pam_krb5.so also gets the tokens when authenticating using
kerberos password IF AND ONLY IF the following sshd config variables have
the following values:

PasswordAuthentication yes
ChallengeResponseAuthentication no
UsePrivilegeSeparation no


BOTH these modules need Douglas's pam_afs2.so to make sure someone creates
the PAG. Otherwise things get messy, as noted in earlier posts by
various people.

Does pam_afs2.so *always* create the PAG? I am a little worried it does
not: there are various ways in the code to "goto err" that bypass the
call to libgafstoken, which sets the PAG. Would it be possible to add a
check: if pam_afs2.so detects (available) AFS tokens, it would create the
new PAG no matter what? (No one should call pam_afs2.so twice anyway, so
there should be no fear of creating a new PAG over one we created
previously.)


Also, with RedHat's pam_krb5.so one can change the ticket lifetimes to
something different than the realm default. With Debian's this is not
possible (at least there is nothing about it in the docs).
Post by Douglas E. Engert
I used to be on the Globus project, but not any more. The gatekeeper
was set up to be able to fork/exec the gssklog. There is a gatekeeper
patch in with it too. You could run the gssklog for the Globus users
while still using Kerberos for your normal users.
This sounds very nice. I'll look into this after this AFS thing is
finished.

Cheers,
Juha
--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Douglas E. Engert
2006-01-13 19:32:35 UTC
Post by Juha Jäykkä
Post by Douglas E. Engert
I would like to see the OpenAFS people pick this up and distribute the
pam_afs2 or its equivalent with OpenAFS, as it is only used by AFS. The
discussions on the list lately are headed this way.
I support that idea. It is the only pam module which does things the Right
Way(tm). I did some testing with OpenSSH 4.2, PAM and OpenAFS today (the
RedHat's pam_krb5.so
Will leak tokens (not create a PAG) when authenticating with pubkey
Gets tokens when given kerberos password
Does not get tokens when given the password pam_unix.so uses
Gets tokens when authenticating with gssapi
All this works no matter how sshd is configured
Debian's pam_krb5.so (where does this originate from?)
Will leak tokens (not create a PAG) when authenticating with pubkey
Does not get tokens when given the password pam_unix.so uses
Gets tokens when authenticating with gssapi
All this works no matter how sshd is configured
Debian's pam_krb5.so also gets the tokens when authenticating using
kerberos password IF AND ONLY IF the following sshd config variables have
PasswordAuthentication yes
ChallengeResponseAuthentication no
UsePrivilegeSeparation no
BOTH these modules need Douglas's pam_afs2.so to make sure someone creates
the PAG. Otherwise things get messy, as noted in earlier posts by
various people.
Does pam_afs2.so *always* create the PAG?
Yes, unless you passed in the nopag option. Useful for xlock or xscreensaver,
to reuse the current PAG. Tell pam_krb5 to reuse the ticket cache at the
same time.
Post by Juha Jäykkä
I am a little worried it does
not: there are various ways in the code to "goto err" that bypass the
call to libgafstoken, which sets the PAG. Would it be possible to add a
check: if pam_afs2.so detects (available) AFS tokens, it would create the
new PAG no matter what?
Not really. pam_afs2 does not detect whether there is a PAG already, or
whether there are any tokens. It does not have any AFS code in it, only
the syscall and the fork/exec.

Post by Juha Jäykkä
(No one should call pam_afs2.so twice anyway, so there should be no fear
of creating a new PAG over one we created previously.)
Also, with RedHat's pam_krb5.so one can change the ticket lifetimes to
something different than the realm default. With Debian's this is not
possible (at least there is nothing about it in the docs).
Post by Douglas E. Engert
I used to be on the Globus project, but not any more. The gatekeeper
was set up to be able to fork/exec the gssklog. There is a gatekeeper
patch in with it too. You could run the gssklog for the Globus users
while still using Kerberos for your normal users.
This sounds very nice. I'll look into this after this AFS thing is
finished.
Cheers,
Juha
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Russ Allbery
2006-01-13 21:22:47 UTC
Post by Juha Jäykkä
Debian's pam_krb5.so (where does this originate from?)
http://www.squishy.cc/software/pam-krb5/
Post by Juha Jäykkä
Will leak tokens (not create a PAG) when authenticating with pubkey
This isn't a problem that pam-krb5 should be solving; instead, it should
be dealt with in a session module like libpam-openafs-session. The latter
is currently not working quite the way that one would want; it should
always create a PAG regardless of whether it can obtain tokens or not.
This is something I plan on looking at when this is all overhauled with
the 1.4.1 packages; since that's going to require substantial work, I
don't really want to go through all of the testing twice.
Post by Juha Jäykkä
Does not get tokens when given the password pam_unix.so uses
Well, yeah. :)
Post by Juha Jäykkä
Gets tokens when authenticating with gssapi
All this works no matter how sshd is configured
Debian's pam_krb5.so also gets the tokens when authenticating using
the Kerberos password IF AND ONLY IF the following sshd config variables are set:
PasswordAuthentication yes
ChallengeResponseAuthentication no
UsePrivilegeSeparation no
Are you using the version in unstable or the version in stable?

The version in unstable works fine with privilege separation. It still
doesn't work with ChallengeResponseAuthentication, but the version in
Subversion does; I need to double-check with Sam about uploading it. It
has a workaround for OpenSSH's bizarre way of calling PAM which breaks the
Debian PAM mini-policy; Sam wasn't particularly happy about working around
this, but I think it's a necessary evil for the time being.
Post by Juha Jäykkä
BOTH these modules need Douglas's pam_afs2.so to make sure someone
creates the PAG. Otherwise things get messy, like noted in earlier posts
by various people.
I expect that for the 1.4.1 package set I'll either fix
libpam-openafs-session or just replace it with pam_afs2.so, depending on
which looks easier to maintain.
Post by Juha Jäykkä
Also, with RedHat's pam_krb5.so one can change the ticket lifetimes to
something different than the realm default. With Debian's this is not
possible (at least there is nothing about it in the docs).
It's not currently possible, no. Why would you want to do this, out of
curiosity?

I'm hoping that the new profile library support in upcoming versions of
MIT Kerberos will make handling some things like this much easier,
although I'm not sure I understand the use case of this particular
feature.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Juha Jäykkä
2006-01-13 06:42:21 UTC
Permalink
Post by Douglas E. Engert
It and its friends can be found at ftp://achilles.ctd.anl.gov/pub/DEE
pam_afs2-0.1.tar
gafstoken-0.3.tar
gssklog-0.11.tar
This is the beast I was referring to. I'm sorry I was too lazy to check
who created it and properly credit you.
Post by Douglas E. Engert
The design goals of all of this was to keep AFS as far away from
Kerberos as possible, and never have to rely on a vendor's daemon to
have to link (even dynamically via pam) with either and especially with
both.
I agree that this is the cleanest way to go. And it is in the good old Unix
philosophy that one tool does one thing - "ls" does not remove files and
"rm" does not list them. =)

I never tried it, though, except for debugging other pam modules (I used
it to call a little shell script). The reason was that it did not appear
to have shown any signs of activity for about a year, and I shun inactive
upstreams.
Obviously, I was wrong. Perhaps you just made such a damn good module that
it needs no further development? =) Testing pam_afs2 and friends is now
back on my todo list...

Perhaps someone would like to start maintaining a Debian package of these
three? I'd do that myself but I lack the status of a DD (that could be
corrected, though).
Post by Douglas E. Engert
The gssapi used in gssklog does not even have to be Kerberos! It was
originally designed for use with the Globus GSI gssapi. (But that is
It might be very important for us, since we participate in two
grid projects (NorduGrid: http://www.nordugrid.dk and M-grid:
http://www.csc.fi/proj/mgrid/), both of which use Globus.
Post by Douglas E. Engert
gssklog. (There is no MIT or Heimdal Kerberos on these machines, other
than what the AFS kernel has built in.)
This sounds like the way to go: separate the OpenAFS and Kerberos
*implementations* as far as possible. The fact that AFS uses Kerberos
internally means this is a little more complicated an issue than just not
linking with Kerberos, but still you're obviously going the right way.

Cheers,
Juha

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Sergio Gelato
2006-01-12 22:42:24 UTC
Permalink
Post by Russ Allbery
Post by Juha Jäykkä
I would have thought pam_krb5.so [1] does this by itself, but
It's only a PAM module for Kerberos. It doesn't know anything about AFS.
I disagree.
Russ Allbery
2006-01-13 00:15:08 UTC
Permalink
If you're using privilege separation in OpenSSH, the setpag() that's
done in the authentication phase may not affect the user session (unless
they've managed to make that process a descendant of the one in which
the authentication takes place, or possibly unless the "multithreaded
sshd" hack is used). It's safer to setpag() in the session establishment
phase.
In fact, if you're using OpenSSH 4.2 and aren't building with the
(unsupported and strongly discouraged by upstream) threading hack, any
setpag() done in the authentication phase *definitely won't* affect the
user session. OpenSSH 4.2 spawns a child process to do the PAM calls.
(It's a stupid architecture that breaks all kinds of other things, but I'm
not guessing I'm going to get anywhere with that discussion.)

See Debian bug #342157.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-13 02:27:29 UTC
Permalink
On Thursday, January 12, 2006 04:15:08 PM -0800 Russ Allbery
Post by Russ Allbery
In fact, if you're using OpenSSH 4.2 and aren't building with the
(unsupported and strongly discouraged by upstream) threading hack, any
setpag() done in the authentication phase *definitely won't* affect the
user session. OpenSSH 4.2 spawns a child process to do the PAM calls.
(It's a stupid architecture that breaks all kinds of other things, but I'm
not guessing I'm going to get anywhere with that discussion.)
It does break all kinds of things, and it is annoying.

However, they do it that way not as part of some misguided attempt at
"security", but because of the constraints imposed by the way their SSH
protocol parser interacts with keyboard-interactive. Fixing it would
require significant work, not to mention actually getting the fix accepted.

-- Jeff
Russ Allbery
2006-01-13 02:41:21 UTC
Permalink
Post by Jeffrey Hutzelman
However, they do it that way not as part of some misguided attempt at
"security", but because of the constraints imposed by the way their SSH
protocol parser interacts with keyboard-interactive. Fixing it would
require significant work, not to mention actually getting the fix accepted.
Could you give me more details on why that would be the case? It doesn't
intuitively make sense to me why proxying the PAM interaction through yet
another level of indirection would help. Some kind of a deadlock
situation where you don't know which source of input to wait for, perhaps?
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-13 22:06:10 UTC
Permalink
On Thursday, January 12, 2006 06:41:21 PM -0800 Russ Allbery
Post by Russ Allbery
Post by Jeffrey Hutzelman
However, they do it that way not as part of some misguided attempt at
"security", but because of the constraints imposed by the way their SSH
protocol parser interacts with keyboard-interactive. Fixing it would
require significant work, not to mention actually getting the fix accepted.
Could you give me more details on why that would be the case? It doesn't
intuitively make sense to me why proxying the PAM interaction through yet
another level of indirection would help. Some kind of a deadlock
situation where you don't know which source of input to wait for, perhaps?
Essentially, the issue is that OpenSSH's protocol dispatch engine calls a
handler for each SSH message received, and expects the handler to return so
it can go on waiting for the next message. PAM, on the other hand, wants
to call the application each time it wants to display a message or prompt
for input, and for the application to return with the result. So the
keyboard-interactive driver is stuck in the middle, trying to mediate
between two systems both of which want to be at the top of the call stack.

The way OpenSSH handles this is to run the pam_authenticate in a separate
process (or, with the unsupported "hack", in a separate thread), with the
two processes speaking a trivial protocol to each other. The PAM
conversation function sends messages and prompts up to the main sshd
process, and blocks until it gets a response; in the meantime, the sshd
returns to the message dispatcher, and sends incoming replies to the PAM
process.

Now, another approach would be to turn the PAM call stack "upside-down" by
having the conversation function return PAM_CONV_AGAIN, which _should_
result in the call to pam_authenticate returning PAM_INCOMPLETE. However,
that would be a fair bit of work, and who's to say if they'd take a patch?

-- Jeff
Sergio Gelato
2006-01-13 09:49:49 UTC
Permalink
As for kinit, its not setting the PAG is a surprise to me after
all the praise of Heimdal's supposedly good integration with AFS.
Sometimes you want to start a new PAG, and sometimes you want to add or
refresh credentials in your current PAG.

Actually, Heimdal kinit will start a new PAG when given an explicit
command to run; try
kinit <your-principal> id
and compare the PAG you get with that of the parent process.

I also like it that Heimdal's pagsh (kpagsh, in Debian) will generate
a new KRB5CCNAME, so that a subsequent kinit will not clobber the Kerberos
ccache of the parent process. OpenAFS's pagsh shouldn't (and doesn't) do
that since OpenAFS tries to be agnostic about where the tokens come from
(it doesn't have to be Kerberos 5).
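
For illustration (the principal and realm are made up; on Linux the PAG
typically shows up as extra numeric groups in the id output):

$ kinit juolja@EXAMPLE.ORG id   # Heimdal kinit: runs "id" inside a new PAG
$ kpagsh                        # Debian Heimdal: new PAG plus a fresh ccache
$ echo $KRB5CCNAME              # inside the new shell: differs from the parent's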
Russ Allbery
2006-01-13 19:00:14 UTC
Permalink
I also like it that Heimdal's pagsh (kpagsh, in Debian) will generate a
new KRB5CCNAME, so that a subsequent kinit will not clobber the Kerberos
ccache of the parent process. OpenAFS's pagsh shouldn't (and doesn't) do
that since OpenAFS tries to be agnostic about where the tokens come from
(it doesn't have to be Kerberos 5).
Yeah, OpenAFS has a pagsh.krb that does this for the K4 KRBTKFILE, but
like most of the rest of the K4-only stuff, it's not installed in the
Debian packages. It might be worthwhile to create a simple pagsh.krb5
that does the same thing for Kerberos v5, just because changing ticket
cache names securely is a little tricky to do portably in shell.
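
A naive sketch of such a pagsh.krb5, glossing over exactly the portability
and temp-file issues mentioned above:

#!/bin/sh
# pagsh.krb5 sketch (not a shipped tool): pick a private ccache name,
# then start a shell in a fresh PAG; cleanup of the cache file is left out
KRB5CCNAME="FILE:$(mktemp /tmp/krb5cc_XXXXXX)" || exit 1
export KRB5CCNAME
exec pagsh -c "${SHELL:-/bin/sh}"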
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-13 22:08:17 UTC
Permalink
On Friday, January 13, 2006 11:00:14 AM -0800 Russ Allbery
Post by Russ Allbery
I also like it that Heimdal's pagsh (kpagsh, in Debian) will generate a
new KRB5CCNAME, so that a subsequent kinit will not clobber the Kerberos
ccache of the parent process. OpenAFS's pagsh shouldn't (and doesn't) do
that since OpenAFS tries to be agnostic about where the tokens come from
(it doesn't have to be Kerberos 5).
Yeah, OpenAFS has a pagsh.krb that does this for the K4 KRBTKFILE, but
like most of the rest of the K4-only stuff, it's not installed in the
Debian packages.
It does that because the *.krb utilities also maintain kerberos ticket
files; for example, klog.krb will leave you with a TGT that you can use for
other applications.

Those tools are deprecated, and IMHO a pagsh.krb5 would be inappropriate,
unless we plan on shipping a complete suite of tools that manage krb5
tickets, as we did for krb4.

-- Jeff
Russ Allbery
2006-01-14 00:37:52 UTC
Permalink
Post by Jeffrey Hutzelman
Those tools are deprecated, and IMHO a pagsh.krb5 would be
inappropriate, unless we plan on shipping a complete suite of tools that
manage krb5 tickets, as we did for krb4.
The problem is, pagsh.krb5 is a program that should alter both AFS and
Kerberos state. The Kerberos folks don't want it because of the AFS build
dependency, and the AFS folks don't want it because AFS doesn't manage
Kerberos ticket caches. *heh*. I suppose one could just write it as a
separate utility that's distributed as its own package.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Rainer Toebbicke
2006-01-16 08:48:18 UTC
Permalink
Post by Jeffrey Hutzelman
Those tools are deprecated, and IMHO a pagsh.krb5 would be
inappropriate, unless we plan on shipping a complete suite of tools that
manage krb5 tickets, as we did for krb4.
Guess what Heimdal's pagsh does?

It follows the do-what-I-mean principle, in this case a new PAG, a new
KRBTKFILE and KRB5CCNAME...
And since its kinit does not need any aklog or such, everything just plain
works as before, except klog => kinit (and alas: no -pipe as on klog, which
could almost be considered a bug...)
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Rainer Toebbicke
European Laboratory for Particle Physics(CERN) - Geneva, Switzerland
Phone: +41 22 767 8985 Fax: +41 22 767 7155
Jeffrey Hutzelman
2006-01-16 22:50:05 UTC
Permalink
On Monday, January 16, 2006 09:48:18 AM +0100 Rainer Toebbicke
Post by Rainer Toebbicke
Post by Jeffrey Hutzelman
Those tools are deprecated, and IMHO a pagsh.krb5 would be
inappropriate, unless we plan on shipping a complete suite of tools that
manage krb5 tickets, as we did for krb4.
Guess what Heimdal's pagsh does?
Heimdal is a set of krb5 tools. OpenAFS is not.

Juha Jäykkä
2006-01-13 07:05:09 UTC
Permalink
Red Hat's pam_krb5 is not alone in having this problem;
see Debian bug #264902.
Thanks. This looks like exactly what I experienced. I never tried logging
in as another user, though, but since my shell got in the same PAG as
sshd, I assume the other user logging in through the same sshd would end
up in the same PAG, too.
Is there a Debian bug number for this problem? I couldn't find it.
This has already been resolved; it was a krb5.conf comment-parsing bug in
libkrb5.so. It's Debian #314609, as Russ noted yesterday.

As for kinit, its not setting the PAG is a surprise to me after
all the praise of Heimdal's supposedly good integration with AFS. It's
less of a problem, though, since, as you said, it's a command-line tool
and easy to script around with a shell script sitting before kinit in
PATH. It wastes a shell, but solves the problem.
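
Such a wrapper could be as small as this, assuming the principal is always
given explicitly (purely illustrative, not a tested script):

#!/bin/sh
# drop-in "kinit" placed earlier in $PATH: Heimdal kinit starts a new PAG
# when given a command, so run a shell as that command -- the "wasted"
# shell mentioned above
exec /usr/bin/kinit "$@" "${SHELL:-/bin/sh}"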

Cheers,
Juha

--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| Laboratory of Theoretical Physics |
| Department of Physics, University of Turku |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Russ Allbery
2006-01-14 19:35:05 UTC
Permalink
Post by Juha Jäykkä
Red Hat's pam_krb5 is not alone in having this problem;
see Debian bug #264902.
Thanks. This looks like exactly what I experienced. I never tried logging
in as another user, though, but since my shell got in the same PAG as
sshd, I assume the other user logging in through the same sshd would end
up in the same PAG, too.
Yeah, currently you have to be careful to start sshd outside of a PAG when
using libpam-openafs-session (with, for instance, "echo /etc/init.d/sshd
start | at now" if you still have at installed despite its security track
record).
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Juha Jäykkä
2006-01-14 22:50:43 UTC
Permalink
Post by Russ Allbery
Yeah, currently you have to be careful to start sshd outside of a PAG
when using libpam-openafs-session (with, for instance, "echo
Plus you need to make sure your users end up in session-specific PAGs.
While this is not strictly necessary, it's quite inconvenient to have two
(possibly unrelated) ssh sessions to the same host share a PAG. As
someone already mentioned, this leads to a situation where session #1 doing
an unlog also unlogs session #2, which is probably not what people would
expect.

I got all this now with pam_afs2.so. It's really very nice.
Post by Russ Allbery
/etc/init.d/sshd start | at now" if you still have at installed despite
its security track record).
I don't have at and I already made the mistake of putting cron into my
shell's PAG, so I could not think of anything else except editing
/etc/inittab and running "telinit q". It works. =)
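
For the record, the inittab variant can be a single line like this
(runlevels and the -D flag are illustrative); init itself never sits in a
PAG, so sshd inherits none:

# /etc/inittab, then "telinit q"
ss:2345:respawn:/usr/sbin/sshd -D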

Cheers,
Juha
--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
Douglas E. Engert
2006-01-11 22:01:20 UTC
Permalink
Post by Juha Jäykkä
Post by Russ Allbery
sshd won't leak this token to the user if your PAM setup is appropriate.
You have to make sure that the user is put into their own PAG as part of
the session initialization process, even if they don't get a token.
I would have thought pam_krb5.so [1] does this by itself, but apparently I
am mistaken (again).
Not really. pam_krb5 is for Kerberos. PAGs are for AFS. Kerberos is much
more widely used than AFS, so many pam_krb5 routines don't know anything
about AFS, or PAGs. But some do, so look for a pam_krb5afs.so
Post by Juha Jäykkä
While it would be relatively easy to write a small
pam module to handle the creation of a suitable PAG, I must wonder whether
one exists already?
Yes, pam_afs2 can be called after a pam_krb5 to get a PAG, and fork/exec
an aklog, ak5log, afslogin or gssklog to get the tokens.

See ftp://achilles.ctd.anl.gov/pub/DEE/pam_afs2-0.1.tar
Post by Juha Jäykkä
Anything that depends on aklog from openafs-krb5 will
not do since it just segfaults (probably the AES keys again, but I did not
test this point).
By the way, is Heimdal's kinit/afslog at fault here for not creating the
proper PAG? It's very convenient to have kinit do all the tricks, but if
it does them wrong...
Post by Russ Allbery
Ah! Thank you for saying! I never would have guessed that, and now
I'll know for the future.
You're welcome.
Cheers,
Juha
[1] It looks like it's the old RedHat pam_krb5.so merged with the sf.net
version, and it still sees active development, unlike any other
pam_krb5.so I can find.
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Juha Jäykkä
2006-01-11 22:08:50 UTC
Permalink
Post by Douglas E. Engert
about AFS, or PAGs. But some do, so look for a pam_krb5afs.so
I think that pam_krb5afs.so no longer exists, at least the README of
RedHat's pam_krb5.so says

This is a major rewrite of pam_krb5afs. Call it 2.0, for lack of a better term.

o Compared to the earlier releases, this tree builds a single module which
"knows" how to do everything which is knowable at compile-time.

RedHat's pam_krb5.so *was* the source of pam_krb5afs.so - one source
anyway, the only one I am aware of. But it looks like it cannot do the
PAGs right. Has it ever done so?
Post by Douglas E. Engert
Yes, pam_afs2 can be called after a pam_krb5 to get a PAG, and fork/exec
a aklog, ak5log, afslogin or gssklog to get the tokens.
See ftp://achilles.ctd.anl.gov/pub/DEE/pam_afs2-0.1.tar
It has been very helpful in debugging since it can even exec a shell
script. I used that a lot to find out what's going wrong.

Cheers,
Juha
--
-----------------------------------------------
| Juha Jäykkä, ***@utu.fi |
| home: http://www.utu.fi/~juolja/ |
-----------------------------------------------
zeroguy
2006-01-13 02:50:19 UTC
Permalink
On Thu, 12 Jan 2006 00:08:50 +0200
I think that pam_krb5afs.so no longer exists, [...]
pam_krb5afs exists at least in Debian for Heimdal clients. I have a few
machines with Heimdal running it now, and it appears to work fine.

-zeroguy
zeroguy
2006-01-14 00:02:11 UTC
Permalink
On Fri, 13 Jan 2006 09:12:14 +0200
Post by zeroguy
I think that pam_krb5afs.so no longer exists, [...]
pam_krb5afs exists at least in Debian for Heimdal clients. I have a few
machines with Heimdal running it now, and it appears to work fine.
And in which package? I could not find it, but then again, I did not look
inside all the 15000+ packages. =) I'd be very happy to see if that works,
though I think it has the same problem as RedHat's pam_krb5, which evolved
from the source of pam_krb5afs.
Oops, I lied. I have numerous Debian boxes which happen to have Heimdal
and pam_krb5afs on them, but I don't think they are in APT. See:

http://mailman.boxedpenguin.com/pipermail/debian-kerberos/2003-May/000751.html
https://lists.openafs.org/pipermail/openafs-info/2005-April/017538.html

Sorry about that.

-zeroguy
Douglas E. Engert
2006-01-04 20:19:56 UTC
Permalink
Post by Russ Allbery
Post by Douglas E. Engert
The sshd could accept a forwarded ticket for the sole purpose of using
it to get an AFS token so the sshd could access the .k5login file before
the krb5_kuserok was called (There might be some other dot files that
could also be accessed early.) Getting this ticket early does not
change the security model, as the checking of the .k5login is to allow
access to the local machine, not the AFS file system. The forwarded
ticket and token could be discarded if the krb5_kuserok fails.
The client is, understandably, not going to forward the ticket until after
the authentication step is complete, so what this basically means is
authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.
It's part of the GSSAPI exchange, to get the forwarded ticket, and is done
before the krb5_kuserok is called outside of gssapi.
Post by Russ Allbery
And this doesn't help with the PAM situation, where you don't get an AFS
token until after pam_setcred is called, which is after pam_authenticate,
and some programs only call pam_authenticate and never call the other PAM
functions. This is probably wrong of them, but still, it shouldn't
introduce a security hole.
I know pam is a mess and applications don't call it correctly.
Post by Russ Allbery
I suppose you could fall back on the standard PAM cheat of doing
everything in pam_authenticate and making everything else a no-op, but
that too breaks in other situations where people call pam_authenticate in
a different context than pam_setcred (OpenSSH is again at fault).
I don't see a good solution to this, unfortunately. I wish that AFS
supported the directory lookup semantics supported in Unix with execute
but no read, but I can see why that would be rather hard to do.
Not sure if that would even help. The point I would like is that the .k5login
is only readable if I as a user permit it, i.e. by me forwarding a ticket to
some machine so it can read it, or by me adding the host onto the ACL of the
directory.
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Sergio Gelato
2006-01-04 22:18:24 UTC
Permalink
Post by Douglas E. Engert
Post by Russ Allbery
Post by Douglas E. Engert
The sshd could accept a forwarded ticket for the sole purpose of using
it to get an AFS token so the sshd could access the .k5login file before
the krb5_kuserok was called (There might be some other dot files that
could also be accessed early.) Getting this ticket early does not
change the security model, as the checking of the .k5login is to allow
access to the local machine, not the AFS file system. The forwarded
ticket and token could be discarded if the krb5_kuserok fails.
If I remember correctly, you've been advocating the removal of explicit
AFS support code from OpenSSH in favour of relying on PAM to obtain the
AFS tokens. (Vendors shouldn't be required to ship AFS-aware sshd's,
or something to that effect.) Exactly where in the PAM stack do you want
to obtain (and, if need be, discard) this extra token? pam_open_session()
is clearly too late for this, which is a pity.

Is there even a single PAM call between the end of gss_accept_sec_context()
and the call to ssh_gssapi_userok() ? I guess not.
Post by Douglas E. Engert
Post by Russ Allbery
The client is, understandably, not going to forward the ticket until after
the authentication step is complete, so what this basically means is
authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.
It's part of the GSSAPI exchange, to get the forwarded ticket, and is done
before the krb5_kuserok is called outside of gssapi.
True, reauthentication would not be necessary. OpenSSH upstream may
still balk at the additional #ifdef USE_AFS, though.
Post by Douglas E. Engert
Post by Russ Allbery
And this doesn't help with the PAM situation, where you don't get an AFS
token until after pam_setcred is called, which is after pam_authenticate,
and some programs only call pam_authenticate and never call the other PAM
functions. This is probably wrong of them, but still, it shouldn't
introduce a security hole.
This is straying off-topic, but I'd argue that the default behaviour of
a PAM module should still be the correct one: call krb5_kuserok() from
pam_sm_acct_mgmt() only. Then one can add options to work around bugs in
important applications. As far as the screensavers' not running the
account stack, I'd be more worried about what happens when a Kerberos
password has just expired than about krb5_kuserok() being skipped:
after all, the initial login must have run the account stack successfully.
Post by Douglas E. Engert
I know pam is a mess and applications don't call it correctly.
Worse: there seems to be no consensus on how to call it correctly.
Post by Douglas E. Engert
Post by Russ Allbery
I don't see a good solution to this, unfortunately. I wish that AFS
supported the directory lookup semantics supported in Unix with execute
but no read, but I can see why that would be rather hard to do.
Would it buy you all that much? We're talking about well-known file
names here, it's easy to test for their existence one by one even
without the convenience of readdir().
Post by Douglas E. Engert
Not sure if that would even help. The point I would like is that the .k5login
is only readable if I as a user permit it, i.e. by me forwarding a ticket to
some machine so it can read it, or by me adding the host onto the ACL of the
directory.
I as a user may want to allow the host to read .k5login during
authentication but deny such access to other unprivileged users
of the same computer.

That seems to call for a host.hostname entry in PTS and a way for sshd
etc. to obtain an AFS token for that, discarding it (e.g. by changing PAGs)
before control is handed to the user... OK, I see that Jeffrey Altman
has beaten me by a few minutes.
Russ Allbery
2006-01-04 22:55:19 UTC
Permalink
Post by Sergio Gelato
This is straying off-topic, but I'd argue that the default behaviour of
a PAM module should still be the correct one: call krb5_kuserok() from
pam_sm_acct_mgmt() only. Then one can add options to work around bugs in
important applications.
The problem with doing this is that if you deploy a PAM module with this
configuration, the local system administrator that's using xlockmore and
has local accounts that happen to have the same username as a Kerberos
principal but are not the same person (an extremely common configuration;
there are hundreds of systems in this state around Stanford) immediately
gets a security hole. And they have to know to add weird options to the
pam_krb5 invocation in order to plug it.

I don't like defaulting to being insecure. :/
Post by Sergio Gelato
As far as the screensavers' not running the account stack, I'd be more
worried about what happens when a Kerberos password has just expired
than about krb5_kuserok() being skipped: after all, the initial login
must have run the account stack successfully.
The screen savers that I've looked at actually explicitly don't call the
account stack (or call it and ignore its return status) because they don't
want to lock out users with expired accounts.

Don't ask me; I don't write the applications, I just try to hack PAM
modules to work with them.

(The other pain in the ass is screen savers like xlockmore that also
don't call either pam_setcred or pam_open_session -- why are there two
APIs when most PAM modules appear to treat these as synonymous? -- which
means that they don't refresh Kerberos tickets and AFS tokens.)
Post by Sergio Gelato
Post by Russ Allbery
I don't see a good solution to this, unfortunately. I wish that AFS
supported the directory lookup semantics supported in Unix with execute
but no read, but I can see why that would be rather hard to do.
Would it buy you all that much? We're talking about well-known file
names here, it's easy to test for their existence one by one even
without the convenience of readdir().
I personally don't care that .k5login is world-readable. That's fine by
me. I just care that I have to make my home directory listable in order
to support it.

Of course, I personally don't actually do that; I use a trick that we've
been recommending to advanced AFS users for years at Stanford. My
"official" home directory is completely world-readable, but all that's in
it is the basic files required for authorization and a stub shell
initialization file that cd's to a subdirectory called home, sets HOME,
and then sources my real shell initialization files. Works like a charm
(at least once you've patched all the programs that don't honor $HOME in
the environment, but we did that years ago and submitted all the patches
back and for the most part everything works now).

Explaining that to the average user is hard, though.
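
A stub of that kind might look roughly like this (paths and file names are
made up):

# ~/.profile in the world-readable "official" home directory
HOME=$HOME/home          # protected subdirectory holding the real files
export HOME
cd "$HOME"
[ -r .profile ] && . ./.profile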
Post by Sergio Gelato
That seems to call for a host.hostname entry in PTS and a way for sshd
etc. to obtain an AFS token for that, discarding it (e.g. by changing
PAGs) before control is handed to the user... OK, I see that Jeffrey
Altman has beaten me by a few minutes.
Yeah, Jeff's idea is an interesting one. It's a lot of PTS IDs, but I
guess that isn't really a scarce commodity.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Hutzelman
2006-01-04 23:24:11 UTC
Permalink
On Wednesday, January 04, 2006 02:55:19 PM -0800 Russ Allbery
Post by Russ Allbery
Post by Sergio Gelato
As far as the screensavers' not running the account stack, I'd be more
worried about what happens when a Kerberos password has just expired
than about krb5_kuserok() being skipped: after all, the initial login
must have run the account stack successfully.
The screen savers that I've looked at actually explicitly don't call the
account stack (or call it and ignore its return status) because they don't
want to lock out users with expired accounts.
Don't ask me; I don't write the applications, I just try to hack PAM
modules to work with them.
This is the right behavior. pam_acct_mgmt() is about "account management",
not all-purpose authorization checks. In particular, it is about deciding
whether this account is allowed to log in at all, not about whether a
particular authenticated entity is allowed to access that account. Such
decisions are _expected_ to be made in pam_authenticate.
Post by Russ Allbery
(The other pain in the ass are screen savers like xlockmore that also
don't call either pam_setcred or pam_open_session -- why are there two
APIs when most PAM modules appear to treat these as synonymous? -- which
means that they don't refresh Kerberos tickets and AFS tokens.)
Well, they shouldn't call pam_open_session, because they're not opening a
new session. There is an appropriate opcode to use with pam_setcred for
this, and I agree that applications that fail to do so are buggy. About
all we can do about it is submit patches and hope they clean up their act.

-- Jeff
Russ Allbery
2006-01-04 23:29:57 UTC
Permalink
Post by Jeffrey Hutzelman
Post by Russ Allbery
The screen savers that I've looked at actually explicitly don't call
the account stack (or call it and ignore its return status) because
they don't want to lock out users with expired accounts.
Don't ask me; I don't write the applications, I just try to hack PAM
modules to work with them.
This is the right behavior. pam_acct_mgmt() is about "account
management", not all-purpose authorization checks. In particular, it is
about deciding whether this account is allowed to log in at all, not
about whether a particular authenticated entity is allowed to access
that account. Such decisions are _expected_ to be made in
pam_authenticate.
Ah, okay, that's good to know. That makes me think that the current
Debian pam_krb5 implementation is correct here, and more correct than the
previous version.
Post by Jeffrey Hutzelman
Well, they shouldn't call pam_open_session, because they're not opening
a new session.
D'oh. Yes, of course.
Post by Jeffrey Hutzelman
There is an appropriate opcode to use with pam_setcred for this, and I
agree that applications that fail to do so are buggy. About all we can
do about it is submit patches and hope they clean up their act.
I've submitted a Debian bug against xlockmore, at least. (And against
xdm, which calls pam_setcred multiple times and discards the environment
settings the last time it's called. And against OpenSSH, which calls
pam_authenticate in a child process and doesn't preserve any pam_data
across to the pam_setcred function.)
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Sergio Gelato
2006-01-05 00:05:19 UTC
Permalink
Post by Russ Allbery
Post by Sergio Gelato
As far as the screensavers' not running the account stack, I'd be more
worried about what happens when a Kerberos password has just expired
than about krb5_kuserok() being skipped: after all, the initial login
must have run the account stack successfully.
The screen savers that I've looked at actually explicitly don't call the
account stack (or call it and ignore its return status) because they don't
want to lock out users with expired accounts.
My understanding is that in the Kerberos case this is counterproductive:
a principal with an expired password will only be able to get a ticket
for the password-changing service, which the PAM application isn't in a
position to verify. Surely you don't want to make the authentication
succeed on KDC_ERR_KEY_EXPIRED ?

Screen savers clearly ought to call the account stack; it should be the
system administrator's job to configure that sensibly, according to site
policy. Maybe it's OK to use pam_permit, maybe not. And I see no reason
why a screen saver couldn't prompt the user for a password change.
(Well, not for a Kerberos password change anyway; I suppose that some other
authentication systems may require root privileges in order to change a
password and the screen saver may be running unprivileged.)
Post by Russ Allbery
Don't ask me; I don't write the applications, I just try to hack PAM
modules to work with them.
That's an unfortunate division of labour.
Post by Russ Allbery
Post by Sergio Gelato
That seems to call for a host.hostname entry in PTS and a way for sshd
etc. to obtain an AFS token for that, discarding it (e.g. by changing
PAGs) before control is handed to the user... OK, I see that Jeffrey
Altman has beaten me by a few minutes.
Yeah, Jeff's idea is an interesting one. It's a lot of PTS IDs, but I
guess that isn't really a scarce commodity.
Not much scarcer than IPv4 addresses.

The case of the unprivileged screen saver trying to krb5_kuserok() may
be tricky, though, especially if the session's original token has expired.
This may be a valid use for a module option to relax the krb5_kuserok()
check; other such uses have been discussed in the past, e.g. for
POP toasters with virtual accounts. (Which reminds me that Courier's
authlib also skips, or used to skip, the account stack.)
Jeffrey Hutzelman
2006-01-05 00:47:00 UTC
Permalink
On Thursday, January 05, 2006 01:05:19 AM +0100 Sergio Gelato
Post by Sergio Gelato
Post by Russ Allbery
Post by Sergio Gelato
As far as the screensavers' not running the account stack, I'd be more
worried about what happens when a Kerberos password has just expired
than about krb5_kuserok() being skipped: after all, the initial login
must have run the account stack successfully.
The screen savers that I've looked at actually explicitly don't call the
account stack (or call it and ignore its return status) because they
don't want to lock out users with expired accounts.
a principal with an expired password will only be able to get a ticket
for the password-changing service, which the PAM application isn't in a
position to verify. Surely you don't want to make the authentication
succeed on KDC_ERR_KEY_EXPIRED ?
Actually, for a full-service application that supports password-changing,
you need to do exactly that. The authentication step succeeds, and the
password-expired error is returned during account management, which tells
the application to try changing it.

For applications that don't support account management, you need to return
an error in the authentication phase, and the user simply will not be able
to use that application until they have corrected the expired password
problem.

I agree, this is suboptimal. Password expiration (as opposed to account
expiration) is an authentication problem, and should be handled at the
authentication stage. However, that's not how PAM works.
Post by Sergio Gelato
Screen savers clearly ought to call the account stack; it should be the
system administrator's job to configure that sensibly, according to site
policy. Maybe it's OK to use pam_permit, maybe not. And I see no reason
why a screen saver couldn't prompt the user for a password change.
(Well, not for a Kerberos password change anyway; I suppose that some
other authentication systems may require root privileges in order to
change a password and the screen saver may be running unprivileged.)
I'm afraid that is not "clearly" the right behavior.

A screen saver is not making access decisions. It doesn't grant or deny
access to the machine, and it's not responsible for deciding whether any
given user gets to start a session. Those decisions were made when the
user logged in. What the screen locker is doing is a considerably simpler
operation on behalf of the _user_, not the system; specifically, it is
preventing the terminal from being used without successful authentication.

Since the screen locker is not making access decisions on behalf of the
system, it's not necessary for it to call account management, and in fact
inappropriate to refuse to unlock on the basis of an account management
issue. I'm not going to say "clearly", because there is plenty of room for
argument here. I will point out that this philosophy seems to be
consistent with the behavior of existing PAM libraries and applications,
and that this list is not the most effective place to try to convince the
PAM community to adopt a new approach.


Sadly, this leaves us in a position where a screen locker can't handle a
password change, because to detect that one is needed it has to call
pam_acct_mgmt, and then it would have to decide what to do about results
other than success or password-expired. Failing on all such results isn't
the right thing to do, but neither is ignoring them.
Post by Sergio Gelato
Post by Russ Allbery
Don't ask me; I don't write the applications, I just try to hack PAM
modules to work with them.
That's an unfortunate division of labour.
Actually, it's highly desirable to have a modular architecture in which
applications, PAM modules, and the PAM framework can be and are maintained
by independent entities. The unfortunate part is that the semantics of PAM
operations are not as well-defined as they could be, and even those parts
that are well-defined aren't as well-documented as they could be. Of
course, it also doesn't help that any number of PAM application authors
have managed to completely ignore even what documentation does exist.


-- Jeff
Douglas E. Engert
2006-01-04 23:31:15 UTC
Permalink
Post by Sergio Gelato
Post by Douglas E. Engert
Post by Russ Allbery
Post by Douglas E. Engert
The sshd could accept a forwarded ticket for the sole purpose of using
it to get an AFS token so the sshd could access the .k5login file before
the krb5_kuserok was called (There might be some other dot files that
could also be accessed early.) Getting this ticket early does not
change the security model, as the checking of the .k5login is to allow
access to the local machine, not the AFS file system. The forwarded
ticket and token could be discarded if the krb5_kuserok fails.
If I remember correctly, you've been advocating the removal of explicit
AFS support code from OpenSSH in favour of relying on PAM to obtain the
AFS tokens. (Vendors shouldn't be required to ship AFS-aware sshd's,
or something to that effect.)
Yes I have been, I still believe this, and it appears to be working.
Solaris 10 ssh with their Kerberos, for example. But sshd still has this
need to access .k5login early, so we have lived with the symlink to a
separate directory.

It's not just ssh that has this problem; it is any login daemon trying to
access the .k5login file, such as dtlogin.

Post by Sergio Gelato
Exactly where in the PAM stack do you want to obtain (and, if need be,
discard) this extra token? pam_open_session()
I am not sure. It has to be after the GSSAPI or pam_authenticate has a TGT
(which could have been via keyboard interactive or at the console) but before
the krb5_kuserok is called.

Jeff's approach of using the host principal to get a token for the host is a
halfway approach that would keep the .k5login from having to be world
readable, but does require the host to be added to ACLs and/or groups. If
that were implemented we could use it as well.
Post by Sergio Gelato
is clearly too late for this, which is a pity.
Is there even a single PAM call between the end of gss_accept_sec_context()
and the call to ssh_gssapi_userok() ? I guess not.
Not sure, I have not looked. Only if a large part of the community were
interested in something like this could we approach the OpenSSH people with
a proposal. It should be generic enough to work for other file systems like
NFSv4.

(A related problem is that a daemon using gss always does authentication and
always needs to do authorization, but there are no gss_authz routines to
use. pam_sm_acct_mgmt could be it.)
Post by Sergio Gelato
Post by Douglas E. Engert
Post by Russ Allbery
The client is, understandably, not going to forward the ticket until after
the authentication step is complete, so what this basically means is
authenticating the user, accepting the forwarded ticket, and then
reauthenticating the user. I guess it would be possible to do this, but
ew. I'm guessing ew would be the OpenSSH upstream reaction too.
It's part of the GSSAPI exchange, to get the forwarded ticket, and is done
before the krb5_kuserok is called outside of gssapi.
True, reauthentication would not be necessary. OpenSSH upstream may
still balk at the additional #ifdef USE_AFS, though.
Post by Douglas E. Engert
Post by Russ Allbery
And this doesn't help with the PAM situation, where you don't get an AFS
token until after pam_setcred is called, which is after pam_authenticate,
and some programs only call pam_authenticate and never call the other PAM
functions. This is probably wrong of them, but still, it shouldn't
introduce a security hole.
This is straying off-topic, but I'd argue that the default behaviour of
a PAM module should still be the correct one: call krb5_kuserok() from
pam_sm_acct_mgmt() only. Then one can add options to work around bugs in
important applications. As far as the screensavers' not running the
account stack, I'd be more worried about what happens when a Kerberos
after all, the initial login must have run the account stack successfully.
Post by Douglas E. Engert
I know pam is a mess and applications don't call it correctly.
Worse: there seems to be no consensus on how to call it correctly.
Yes...
Post by Sergio Gelato
Post by Douglas E. Engert
Post by Russ Allbery
I don't see a good solution to this, unfortunately. I wish that AFS
supported the directory lookup semantics supported in Unix with execute
but no read, but I can see why that would be rather hard to do.
Would it buy you all that much? We're talking about well-known file
names here, it's easy to test for their existence one by one even
without the convenience of readdir().
Post by Douglas E. Engert
Not sure if that would even help. The point I would like is that the .k5login
is only readable if I as a user permit it, i.e. by me forwarding a ticket to
some machine so it can read it, or by me adding the host onto the ACL of the
directory.
I as a user may want to allow the host to read .k5login during
authentication but deny such access to other unprivileged users
of the same computer.
That seems to call for a host.hostname entry in PTS and a way for sshd
etc. to obtain an AFS token for that, discarding it (e.g. by changing PAGs)
before control is handed to the user... OK, I see that Jeffrey Altman
has beaten me by a few minutes.
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Jeffrey Hutzelman
2006-01-05 00:26:07 UTC
Permalink
On Wednesday, January 04, 2006 05:31:15 PM -0600 "Douglas E. Engert"
Post by Sergio Gelato
Exactly where in the PAM stack do you want to obtain (and, if need be,
discard) this extra token? pam_open_session()
I am not sure. It has to be after the GSSAPI or pam_authenticate has a TGT
(which could have been via keyboard interactive or at the console) but before
the krb5_kuserok is called.
You don't want to use the forwarded ticket for this, or one that was
obtained by pam_authenticate. Those are tickets for which the client
(read: attacker) knows the session key, and thus can spoof the server and
give you back a false .klogin file.

Access to the klogin file must in some way be protected by a key which the
potential attacker does not and cannot know.


In AFS, the intended mechanism for this is a feature of rxgk that will
allow the cache manager to combine users' credentials with its own in such
a way that traffic is protected by a key which is tied to the user's
identity but is known only to the cache manager. One side effect of the
technique used to achieve this is that the cache manager will be able to
protect _all_ communication with the fileserver, even when done on behalf
of unauthenticated users.

Another is that, if we choose to implement the feature, the fileserver will
be able to make access control decisions based on the client host's
identity as well as that of the user. The effect would be similar to the
behavior of IP-address ACL's today, except that it would behave reasonably
consistently and would actually be secure.

With these features, it is possible to access and trust a .klogin file
without requiring the user's credentials, if users and/or administrators
are willing to set permissive enough ACL's. Personally, I fall into that
camp -- I don't mind if .k5login contents are world-readable.


Another feature I'd like to see added to the ptserver is the ability to map
a large set of Kerberos principals (either enumerated or based on a
pattern) to a single PTS entry. I've been thinking about this for a long
time, though, and haven't yet come up with a way to represent it that
doesn't make the lookups grossly inefficient (something we cannot afford;
this is an operation the fileserver uses a lot, and its performance is a
real issue).

If we can ever find a way to provide that capability, it should allow the
camp that wants to keep klogin files secret a way to do so without creating
a pts entry for every host.


-- Jeffrey T. Hutzelman (N3NHS) <jhutz+@cmu.edu>
Sr. Research Systems Programmer
School of Computer Science - Research Computing Facility
Carnegie Mellon University - Pittsburgh, PA
Jim Rees
2006-01-04 20:20:59 UTC
Permalink
Any distributed file system has the same problem, if files in the home
directory need to be accessed during login. NFSv4 may have to address the
same problems.

The problem with afs is that you can't put an acl on a file. NFSv4 doesn't
have this problem.
Douglas E. Engert
2006-01-04 20:42:37 UTC
Permalink
Post by Douglas E. Engert
Any distributed file system has the same problem, if files in the home
directory need to be accessed during login. NFSv4 may have to address the
same problems.
The problem with afs is that you can't put an acl on a file. NFSv4 doesn't
have this problem.
The problem is not about ACLs on files or directories; it is more about
allowing world-readable access to what some might consider sensitive data.
I still would not like the .k5login world readable.

What I meant about NFS vs AFS is that both have to live in a unix world
where the system daemons are run as root, and unix code assumes root
automatically has read access to the home directory in all cases. A protected
NFS home directory has the same problem as an AFS home directory.
Post by Douglas E. Engert
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Lester Barrows
2006-01-04 21:16:00 UTC
Permalink
Post by Douglas E. Engert
The problem is not about ACLs on files or directories; it is more about
allowing world-readable access to what some might consider sensitive data.
I still would not like the .k5login world readable.
What I meant about NFS vs AFS is that both have to live in a unix world
where the system daemons are run as root, and unix code assumes root
automatically has read access to the home directory in all cases. A protected
NFS home directory has the same problem as an AFS home directory.
To a degree there is still an issue, but for the common case per-file ACLs
would be a big step forward. Eliminating world read access to the .k5login
while allowing some form of authentication purely to access it would seem to
involve more logic than per-file ACLs. How does the server know when to allow
access to just this file, and to whom? Per-file ACLs would probably be a good
starting point. Such files could then be specially flagged, such that the
server could recognize them as being used with the authorization system.

With AFS we have to decide whether to allow the world to read the entire top
level of a home directory, or to always require the username and password for
each login. At the moment I've chosen the latter, since the former requires
vigilance on the part of the user that I'm not comfortable with counting on.

Best regards,
Lester Barrows
Russ Allbery
2006-01-04 21:36:55 UTC
Permalink
Post by Lester Barrows
With AFS we have to decide whether to allow the world to read the entire
top level of a home directory, or to always require the username and
password for each login.
No, you only have to decide whether to allow the world to *list* the
entire top level of a home directory. Read is not required.
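
Concretely (path made up), lookup alone lets a process list names and stat
entries without being able to read any file contents:

fs setacl /afs/example.org/user/juolja system:anyuser l
fs listacl /afs/example.org/user/juolja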
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Lester Barrows
2006-01-04 22:36:19 UTC
Permalink
Post by Russ Allbery
No, you only have to decide whether to allow the world to *list* the
entire top level of a home directory. Read is not required.
That's still more privileges than we can give out based on our security
policy, since this is the default set of privileges each newly created
directory will be given.

Best regards,
Lester Barrows
Russ Allbery
2006-01-04 22:48:22 UTC
Permalink
Post by Lester Barrows
Post by Russ Allbery
No, you only have to decide whether to allow the world to *list* the
entire top level of a home directory. Read is not required.
That's still more privileges than we can give out based on our security
policy, since this is the default set of privileges each newly created
directory will be given.
I understand, and that's certainly a reasonable position to take. I just
don't want anyone else to get the wrong idea. List is very, very
different from read.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Altman
2006-01-04 21:48:42 UTC
Permalink
This should be a reasonable approach. For all machines that
are being logged into using gssapi krb5, those machines must
have been issued a Kerberos principal and they must have a
keytab. Assign the principal an AFS ID and then use a program
such as kstart to obtain and maintain an AFS token in the PAG
within which the sshd resides. Add the AFS ID to an AFS
group and provide that group rl privileges on the top level
directory of the home volume. This will provide the sshd
the ability to read the directory without requiring that the
directory be world readable.

Now if you want to lock things down a bit more, move all of
the dot files to a new directory on which 'rl' is granted
for the group and instead give the group 'l' privilege on
the top-level directory and place symlinks from the top-level
directory to the real dot files. This will prevent sshd
from being able to read any of the files in the top-level
directory but it will be able to follow the symlinks to
read the dot files that it requires.
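
A sketch of both variants, with made-up cell, principal, group and path
names (and k5start options that vary by version):

# host principal gets an AFS identity and group membership
pts createuser host.client.example.org
pts creategroup sshd-hosts
pts adduser host.client.example.org sshd-hosts

# simple variant: rl on the top-level home directory
fs setacl /afs/example.org/user/juolja sshd-hosts rl

# tighter variant: l on the top level, rl on a dot-file subdirectory,
# plus symlinks for the files sshd needs to read
fs setacl /afs/example.org/user/juolja sshd-hosts l
mkdir /afs/example.org/user/juolja/.dotfiles
fs setacl /afs/example.org/user/juolja/.dotfiles sshd-hosts rl
ln -s .dotfiles/.k5login /afs/example.org/user/juolja/.k5login

# at boot, keep a host token alive in the PAG sshd runs in, e.g. via a
# small wrapper started under pagsh that does roughly:
#   k5start -b -t -K 60 -f /etc/krb5.keytab host/client.example.org
#   exec /usr/sbin/sshd
pagsh -c /usr/local/sbin/sshd-with-host-token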

Jeffrey Altman
Lester Barrows
2006-01-04 22:59:10 UTC
Permalink
Post by Jeffrey Altman
This should be a reasonable approach. For all machines that
are being logged into using gssapi krb5, those machines must
have been issued a Kerberos principal and they must have a
keytab. Assign the principal an AFS ID and then use a program
such as kstart to obtain and maintain an AFS token in the PAG
within which the sshd resides. Add the AFS ID to an AFS
group and provide that group rl privileges on the top level
directory of the home volume. This will provide the sshd
the ability to read the directory without requiring that the
directory be world readable.
Now if you want to lock things down a bit more, move all of
the dot files to a new directory on which 'rl' is granted
for the group and instead give the group 'l' privilege on
the top-level directory and place symlinks from the top-level
directory to the real dot files. This will prevent sshd
from being able to read any of the files in the top-level
directory but it will be able to follow the symlinks to
read the dot files that it requires.
Jeffrey Altman
This could work to a point, although it strikes me as being a hackish
workaround to a capability (well, granularity) that should be in the
filesystem to begin with. Authenticating the daemon makes sense. Having to
symlink all the authentication files into a separate directory to limit the
daemon's privileges is messy and prone to problems induced by user error.

It also gives the SSH daemon list privileges to each new directory a user
creates in the top level of their home directory, which they would then have
to change. If there's a compromise due to this additional privilege,
involving e.g. an SSH private key being created in or copied to the wrong
place, that could be bad.

Best regards,
Lester Barrows
Ken Hornstein
2006-01-04 21:30:37 UTC
Permalink
Post by Lester Barrows
With AFS we have to decide whether to allow the world to read the entire top
level of a home directory, or to always require the username and password for
each login. At the moment I've chosen the latter, since the former requires
vigilance on the part of the user that I'm not comfortable with counting on.
FWIW, we choose the exact opposite option (world readable home directory)
for the exact same reason (lack of confidence in the vigilance of users).

--Ken
Lester Barrows
2006-01-04 22:18:51 UTC
Permalink
Post by Ken Hornstein
FWIW, we choose the exact opposite option (world readable home directory)
for the exact same reason (lack of confidence in the vigilance of users).
--Ken
Most of our users will place files in their home directory, even in the top
level, expecting them to be secure. Additionally, I fully expect that most
users will leave permissions with the default settings. In this case, when a
user creates a directory it inherits the ACL privileges of its parent
directory. There is an expectation in our environment that content is secure
by default. That includes new directories not being world viewable. Depending
on your requirements of course, YMMV.

Best regards,
Lester Barrows
Jeffrey Hutzelman
2006-01-04 23:00:31 UTC
Permalink
On Wednesday, January 04, 2006 04:30:37 PM -0500 Ken Hornstein
Post by Ken Hornstein
Post by Lester Barrows
With AFS we have to decide whether to allow the world to read the entire
top level of a home directory, or to always require the username and
password for each login. At the moment I've chosen the latter, since
the former requires vigilance on the part of the user that I'm not
comfortable with counting on.
FWIW, we choose the exact opposite option (world readable home directory)
for the exact same reason (lack of confidence in the vigilance of users).
So did we, decades ago, and not just for AFS.

-- Jeff
Ken Hornstein
2006-01-05 15:32:44 UTC
Permalink
Post by Lester Barrows
Most of our users will place files in their home directory, even in the top
level, expecting them to be secure. Additionally, I fully expect that most
users will leave permissions with the default settings. In this case, when a
user creates a directory it inherits the ACL privileges of its parent
directory. There is an expectation in our environment that content is secure
by default. That includes new directories not being world viewable. Depending
on your requirements of course, YMMV.
Given the choice between files possibly being world-readable and users
having to expose their password for every login (even if you're
encrypting the session, we've learned the hard way that isn't enough
anymore), we decided to go with the former. As always, to each his or
her own.

--Ken
Lester Barrows
2006-01-05 20:30:53 UTC
Permalink
Post by Ken Hornstein
Given the choice between files possibly being world-readable and users
having to expose their password for every login (even if you're
encrypting the session, we've learned the hard way that isn't enough
anymore), we decided to go with the former. As always, to each his or
her own.
--Ken
This appears to be a security decision based primarily on a technical
limitation in AFS. The per-directory ACL limitation itself was more or less
what I was discussing, as it has caused me more than its share of headaches.
If I could place an ACL on a file and have it alone be readable/listable by
the authentication process, that would be ideal. It's great that a world
listable/readable top level home directory configuration works for your
environment's security requirements, and it certainly saves a bit of work. It
just isn't sufficient to comply with our security plans.

Best regards,
Lester Barrows
Douglas E. Engert
2006-01-05 21:11:47 UTC
Permalink
I have found this discussion very interesting, and it appears many sites
are living with 'l' on the home directory plus a .k5login symlink into a
directory carrying 'rl', mostly because that is the simplest thing to do.

But after reading Jeff Hutzelman's note from earlier today, maybe the
problem is not with AFS but with Kerberos. Kerberos is relying on the system
for integrity-protected access to a file system when that access may not in
fact be protected. It's not just an AFS problem; home directories on NFS may
have the same problem.

Maybe there should be a way to turn off ~/.k5login and provide
the mapping in other ways. The auth_to_local setting in krb5.conf is a start,
but looking at the kuserok.c code in both MIT and Heimdal, they appear to
check for ~/.k5login before falling back to auth_to_local in krb5.conf.
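
(For MIT krb5, a minimal sketch of what that looks like in krb5.conf;
the realm names are made up and Heimdal's configuration syntax differs:

  [realms]
      EXAMPLE.COM = {
          # map juser@OTHER.REALM onto the local account "juser"
          auth_to_local = RULE:[1:$1@$0](.*@OTHER\.REALM)s/@OTHER\.REALM$//
          auth_to_local = DEFAULT
      }

This handles the common "same username, different realm" case that
people often reach for .k5login to solve.)
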

Some other replacements for .k5login include the ANAME_DB code,
a k5login directory holding all of the users' .k5login files on
a local file system, or some NIS or LDAP service.

We might all be better off if the admin of the server had control over the
.k5login, rather than the users.

Many of us may have become too complacent about the use of .k5login.
Post by Lester Barrows
Post by Ken Hornstein
Given the choice between files possibly being world-readable and users
having to expose their password for every login (even if you're
encrypting the session, we've learned the hard way that isn't enough
anymore), we decided to go with the former. As always, to each his or
her own.
--Ken
This appears to be a security decision based primarily on a technical
limitation in AFS. The per-directory ACL limitation itself was more or less
what I was discussing, as it has caused me more than its share of headaches.
If I could place an ACL on a file and have it alone be readable/listable by
the authentication process, that would be ideal. It's great that a world
listable/readable top level home directory configuration works for your
environment's security requirements, and it certainly saves a bit of work. It
just isn't sufficient to comply with our security plans.
Best regards,
Lester Barrows
--
Douglas E. Engert <***@anl.gov>
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(630) 252-5444
Rodney M Dyer
2006-01-05 21:21:52 UTC
Permalink
Post by Lester Barrows
This appears to be a security decision based primarily on a technical
limitation in AFS. The per-directory ACL limitation itself was more or less
what I was discussing, as it has caused me more than its share of headaches.
If I could place an ACL on a file and have it alone be readable/listable by
the authentication process, that would be ideal. It's great that a world
listable/readable top level home directory configuration works for your
environment's security requirements, and it certainly saves a bit of work. It
just isn't sufficient to comply with our security plans.
Wasn't there some talk about the DFS code being opened? And didn't DFS
have file level ACLs? Could any of that code be ported to AFS, or is there
already a project underway for file level ACLs in AFS?

Rodney

Rodney M. Dyer
Windows Systems Programmer
Mosaic Computing Group
William States Lee College of Engineering
University of North Carolina at Charlotte
Email: rmdyer_at_uncc.edu
Web: http://www.coe.uncc.edu/~rmdyer
Phone: (704)687-3518
Help Desk Line: (704)687-3150
FAX: (704)687-2352
Office: Cameron Applied Research Center, Room 232
Jeffrey Hutzelman
2006-01-06 03:43:12 UTC
Permalink
On Thursday, January 05, 2006 04:21:52 PM -0500 Rodney M Dyer
Post by Rodney M Dyer
Wasn't there some talk about the DFS code being opened? And didn't DFS
have file level ACLs? Could any of that code be ported to AFS, or is
there already a project underway for file level ACLs in AFS?
The AFS and DFS codebases are really not very similar.
So no, there's not really anything to be gained from DFS here.

I don't think I know of any current work to provide file-level ACL's in
AFS. Doing so would certainly require changes to the way the fileserver
stores per-file metadata, which means issues dealing with upgrades, and all
sorts of other fun. Obviously, this is something we'd prefer to do only
once.

There certainly have been some thoughts in the direction of extending the
fileserver's metadata format, but I would not expect any serious work in
that direction to happen until after several similar transitions earlier in
the queue, such as extensions to the PRDB format (to support mapping
authentication identities to AFS ID's), the AFS directory format (to
support unicode filenames and >64K files per directory), and possibly to
the VLDB (to support IPv6 and/or per-fileserver service keys).

-- Jeff
Jim Rees
2006-01-06 14:16:40 UTC
Permalink
authentication identities to AFS ID's), the AFS directory format (to
support unicode filenames and >64K files per directory),

Does the directory format have to change for unicode? I'm pretty sure it
will hold utf-8 with no changes. Other things would have to change of
course.
Jeffrey Altman
2006-01-06 15:31:10 UTC
Permalink
Post by Jeffrey Hutzelman
authentication identities to AFS ID's), the AFS directory format (to
support unicode filenames and >64K files per directory),
Does the directory format have to change for unicode? I'm pretty sure it
will hold utf-8 with no changes. Other things would have to change of
course.
Unfortunately, simply storing UTF-8-encoded names brings all the security
and usability issues that the web is currently experiencing. In order to
do this correctly, the client and server must refer to directory entries
by a normalized version of the name using a StringPrep profile. One
side effect of the normalization process is that the resulting string
cannot be displayed as the user intended. Therefore, for a Unicode
directory entry there must be two strings stored: the normalized string
that is used for directory searches and a display string that is the
string the user entered. The end result is that if there are N ways
of entering a particular filename, all N of them will find the same
file and all users will see the same display string regardless of what
they entered.
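
(A quick way to see the "N ways of entering" problem on an ordinary
Linux file system, which stores names as uninterpreted bytes; both
names below render as "café.txt":

  $ touch "$(printf 'caf\xc3\xa9.txt')"     # precomposed U+00E9
  $ touch "$(printf 'cafe\xcc\x81.txt')"    # 'e' plus combining U+0301
  $ ls caf*.txt | wc -l
  2

A normalizing directory format would resolve both spellings to one
entry while still displaying the name the way the user typed it.)
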

At the same time the directory format is extended to support Unicode
we should also make changes to support multiple data streams per file
and a method of supporting additional file attributes.

Jeffrey Altman
Jeffrey Hutzelman
2006-01-06 20:13:00 UTC
Permalink
On Friday, January 06, 2006 10:31:10 AM -0500 Jeffrey Altman
Post by Jeffrey Altman
At the same time the directory format is extended to support Unicode
we should also make changes to support multiple data streams per file
and a method of supporting additional file attributes.
Those are changes to the vnode metadata, not the directory structure.
We'll do them, but the directory changes have been designed and merely lack
implementation, whereas the vnode metadata changes will require changes in
the way the fileserver stores that data and new RPC's, neither of which
has been designed. The two sets of changes are orthogonal; I see no need
to do them "at the same time" or make support for one dependent on the
other.

Getting a directory format that would be backward compatible with existing
clients was a tricky bit of work, but I think we succeeded. Doing that for
vnode indexes will also be tricky. This will almost certainly require
another lengthy design session. But this is getting a bit off-topic for
-info...

-- Jeff
Rainer Toebbicke
2006-01-09 08:45:46 UTC
Permalink
Post by Jeffrey Hutzelman
Getting a directory format that would be backward compatible with
existing clients was a tricky bit of work, but I think we succeeded.
At what state? Source code or "design document"?

Support for directories with > 64k-odd entries? Support for directory
entries pointing to files located in *other* volumes (and with what
type of ACL semantics)?
Post by Jeffrey Hutzelman
Doing that for vnode indexes will also be tricky. This will almost
certainly require another lengthy design session. But this is getting a
bit off-topic for -info...
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Rainer Toebbicke
European Laboratory for Particle Physics(CERN) - Geneva, Switzerland
Phone: +41 22 767 8985 Fax: +41 22 767 7155
Jeffrey Hutzelman
2006-01-09 23:20:19 UTC
Permalink
On Monday, January 09, 2006 09:45:46 AM +0100 Rainer Toebbicke
Post by Rainer Toebbicke
Post by Jeffrey Hutzelman
Getting a directory format that would be backward compatible with
existing clients was a tricky bit of work, but I think we succeeded.
At what state? Source code or "design document"?
There is a design, on the hackathon web site at http://afsig.se/
I don't recall if anyone has begun implementation work; most of us have
been rather busy with other things.
Post by Rainer Toebbicke
Support for directories with > 64k-odd entries?
Yes
Post by Rainer Toebbicke
Support for directory
entries pointing to files located in *other* volumes (and with what
type of ACL semantics)?
Huh? Why would we do that?
Directory entries describe files within a single directory.

-- Jeff
Rainer Toebbicke
2006-01-10 12:44:54 UTC
Permalink
Post by Jeffrey Hutzelman
Post by Rainer Toebbicke
Support for directory
entries pointing to files located in *other* volumes (and with what
type of ACL semantics)?
Huh? Why would we do that?
Directory entries describe files within a single directory.
Well, you're redesigning, so this is the time to ask: why should all files
in a directory have to be in the same "volume"?

The term "volume" is today overloaded with
1. a [relative] position in the name space
2. the quota management
3. a physical location
4. a replication factor
5. a backup entity (if you use AFS backup)
6. a management unit, with a role e.g. in data placement.

That's a lot and makes AFS relatively cumbersome to manage. My last
attempt to 'vos move' a small(!) volume with 1 million files lasted
about 11 hours, and most of the time the volume wasn't writeable!

The client actually handles files (or vnodes), not volumes, and most of
the time does not care about volumes. The lookup hard-smashes the
volume ID into the Fid and only takes the Vnode.Vunique out of the
directory. If the directory were *allowed* to contain something
else as well, the path would be open for subsequently unloading the
volume paradigm.

[In the Apollo Domain file system every name in a directory resolved
to a UUID and I'm sure some still remember all the fancy things that
were thus possible - including screwing it up of course :-) ].

I'm just pleading for increased flexibility: currently too many ideas
to scale up AFS stop at a 20 year-old directory design which you'd
have to redesign as well.

So while you're at it...
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Rainer Toebbicke
European Laboratory for Particle Physics(CERN) - Geneva, Switzerland
Phone: +41 22 767 8985 Fax: +41 22 767 7155
Jeffrey Hutzelman
2006-01-06 20:06:14 UTC
Permalink
Post by Jeffrey Hutzelman
authentication identities to AFS ID's), the AFS directory format (to
support unicode filenames and >64K files per directory),
Does the directory format have to change for unicode? I'm pretty sure it
will hold utf-8 with no changes. Other things would have to change of
course.
It does if we want to avoid requiring people who are already using
non-ASCII filenames to have a "filename flag day". Please read the 2005
hackathon notes on this topic.
Ken Hornstein
2006-01-06 15:36:41 UTC
Permalink
Post by Lester Barrows
This appears to be a security decision based primarily on a technical
limitation in AFS.
Sure was; I never said otherwise. I fully admit it's not ideal, but I
have to work with the tools that I have, not the ones that I wish I
had. Certainly everybody makes those kinds of decisions every day.

We've been working on long-term plans to make it possible to not have
the top-level directory world-readable; they haven't converged yet,
but I hope they will eventually.

--Ken
chas williams - CONTRACTOR
2006-01-06 15:50:42 UTC
Permalink
Post by Jeffrey Altman
we should also make changes to support multiple data streams per file
just curious. could you elaborate on this one a bit? are you talking
about versioning, "resource forks", or something else?
Jeffrey Altman
2006-01-06 17:18:16 UTC
Permalink
Post by chas williams - CONTRACTOR
Post by Jeffrey Altman
we should also make changes to support multiple data streams per file
just curious. could you elaborate on this one a bit? are you talking
about versioning, "resource forks", or something else?
Traditional stream-based file systems bind a single data stream to a
single file name. In order to support arbitrary metadata, Microsoft
implemented multiple data streams in NTFS. For each file name there
is a default "unnamed" stream plus zero or more named streams. These
streams are used to store MacOS resource forks and OS/2 extended
attributes, and Windows uses them to store source information for files
downloaded via Internet Explorer.

Moving, renaming, or deleting a file affects all of its data streams.

Data streams are accessed in Windows using the ":" as a separator. If
"foo" is a filename that accesses the "unnamed" stream, then "foo:bar"
is a named stream associated with that file.
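
(For example, from a Windows command prompt on an NTFS volume:

  C:\> echo secret note > foo.txt:notes
  C:\> more < foo.txt:notes
  secret note

A plain "dir" still shows only foo.txt; the "notes" stream is invisible
to tools that only know about the unnamed stream.)
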

Jeffrey Altman
Jim Rees
2006-01-06 16:34:30 UTC
Permalink
Therefore, for a Unicode
directory entry there must be two strings stored: the normalized string
that is used for directory searches and a display string that is the
string the user entered.

Is there precedent for this? Do any other unicode based file systems do it
this way?
Jeffrey Altman
2006-01-06 17:09:14 UTC
Permalink
Post by Jeffrey Altman
Therefore, for a Unicode
directory entry there must be two strings stored: the normalized string
that is used for directory searches and a display string that is the
string the user entered.
Is there precedent for this? Do any other unicode based file systems do it
this way?
They do not but they really should.

Jeffrey Altman
Russ Allbery
2006-01-06 17:40:13 UTC
Permalink
Therefore, for a Unicode directory entry there must be two strings
stored: the normalized string that is used for directory searches and
a display string that is the string the user entered.
Is there precedent for this? Do any other unicode based file systems do
it this way?
Most other network protocols do StringPrep and the IETF actually requires
it now. It is definitely the right thing to do.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>