Network Working Group                                          J. Postel
Request for Comments: 1588                                   C. Anderson
Category: Informational                                              ISI
                                                           February 1994


                       WHITE PAGES MEETING REPORT



STATUS OF THIS MEMO

   This memo provides information for the Internet community.  This memo
   does not specify an Internet standard of any kind.  Distribution of
   this memo is unlimited.

INTRODUCTION

   This report describes the results of a meeting held at the November
   IETF (Internet Engineering Task Force) in Houston, TX, on November 2,
   1993, to discuss the future of and approaches to a white pages
   directory service for the Internet.

   As proposed to the National Science Foundation (NSF), USC/Information
   Sciences Institute (ISI) conducted the meeting to discuss the
   viability of the X.500 directory as a practical approach to providing
   white pages service for the Internet in the near future and to
   identify and discuss any alternatives.

   An electronic mail mailing list was organized and discussions were
   held via email for two weeks prior to the meeting.

1. EXECUTIVE SUMMARY

   This report is organized around four questions:

   1) What functions should a white pages directory perform?

      There are two functions the white pages service must provide:
      searching and retrieving.

      Searching is the ability to find people given some fuzzy
      information about them, such as "Find the Postel in southern
      California".  Searches may often return a list of matches.

      While the idea of indexing has been around for some time (for
      example, the IN-ADDR tree in the Domain Name System (DNS)), a
      new acknowledgment of its importance has emerged from these



Postel & Anderson                                               [Page 1]
^L
RFC 1588                   White Pages Report              February 1994


      discussions.  Users want fast searching across the distributed
      database on attributes different from the database structure.
      Pre-computed indices satisfy this desire, though only for
      specified searches.
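   The pre-computed index idea above can be sketched concretely.  The
   following is an invented illustration (the entries, attribute names,
   and tokenization are all assumptions, not part of the report): an
   inverted index built once over all attribute values allows a fast
   fuzzy search on attributes other than the database structure.

   ```python
   # A sketch (not from the report) of a pre-computed inverted index
   # that supports fuzzy searching on attributes other than the
   # database structure.  The entries and attribute names are invented.
   entries = [
       {"name": "Jon Postel", "org": "ISI", "region": "southern california"},
       {"name": "Tom Postel", "org": "Example U", "region": "new england"},
   ]

   # Pre-compute the index once: token -> set of entry ids.
   index = {}
   for i, entry in enumerate(entries):
       for value in entry.values():
           for token in value.lower().split():
               index.setdefault(token, set()).add(i)

   def search(query):
       """Return every entry matching all query tokens (often a list)."""
       ids = None
       for token in query.lower().split():
           hits = index.get(token, set())
           ids = hits if ids is None else ids & hits
       return [entries[i] for i in sorted(ids or [])]
   ```

   As the report notes, such an index satisfies the desire for fast
   distributed search, but only for the attributes chosen when the
   index was pre-computed.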

      Retrieval is obtaining additional information associated with a
      person, such as an address, telephone number, email mailbox, or
      security certificate.

      Security certificates (a type of information associated with an
      individual) are essential for the use of end-to-end
      authentication, integrity, and privacy in Internet applications.
      The development of secure applications in the Internet is
      dependent on a directory system for retrieving the security
      certificate associated with an individual.  For example, the
      privacy enhanced electronic mail (PEM) system has been developed
      and is ready to go into service, and is now hindered by the lack
      of an easily used directory of security certificates.  An open
      question is whether or not such a directory needs to be internally
      secure.

   2) What approaches will provide us with a white pages directory?

      It is evident that there are and will be several technologies in
      use.  In order to provide a white pages directory service that
      accommodates multiple technologies, we should promote
      interoperation and work toward a specification of the simplest
      common communication form that is powerful enough to provide the
      necessary functionality.  This "common ground" approach aims to
      provide the ubiquitous WPS (White Pages Service) with a high
      functionality and a low entry cost.

   3) What are the problems to be overcome?

      It must be much easier to be part of the Internet white pages than
   to bring up an X.500 DSA (Directory Service Agent), yet we must
      make good use of the already deployed X.500 DSAs.  Simpler white
      pages services (such as Whois++) must be defined to promote
      multiple implementations.  To promote reliable operation, there
      must be some central management of the X.500 system.  A common
      naming scheme must be identified and documented.  A set of index-
      servers, and indexing techniques, must be developed.  The storage
      and retrieval of security certificates must be provided.










   4) What should the deployment strategy be?

      Some central management must be provided, and easy to use user
      interfaces (such as the Gopher "gateway"), must be widely
      deployed.  The selection of a naming scheme must be documented.
      We should capitalize on the existing infrastructure of already
      deployed X.500 DSAs.  The "common ground" model should be adopted.
      A specification of the simplest common communication form must be
      developed.  Information about how to set up a new server (of
      whatever kind) in "cookbook" form should be made available.

   RECOMMENDATIONS

    1.  Adopt the common ground approach.  Encourage multiple client and
        server types, and the standardization of an interoperation
        protocol between them.  The clients may be simple clients,
        front-ends, "gateways", or embedded in other information access
        clients, such as Gopher or WWW (World Wide Web) client programs.
        The interoperation protocol will define message types, message
        sequences, and data fields.  An element of this protocol should
        be the use of Uniform Resource Locators (URLs).

    2.  Promote the development of index-servers.  The index-servers
        should use several different methods both for gathering data for
        their indices, and for searching their indices.

    3.  Support a central management for the X.500 system.  To get the
        best advantage of the effort already invested in the X.500
        directory system it is essential to provide the relatively small
        amount of central management necessary to keep the system
        functioning.

    4.  Support the development of security certificate storage and
        retrieval from the white pages service.  One practical approach
        is initially to focus on getting support from the existing X.500
        directory infrastructure.  This effort should also include
        design and development of the storage and retrieval of security
        certificates for other white pages services, such as Whois++.
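    The interoperation protocol in recommendation 1 is only outlined in
    this report.  As a toy illustration (the message names, field
    names, and text framing below are invented assumptions, not a real
    protocol), such a protocol might frame each message as a type, a
    sequence number, and named data fields, one of which carries a URL:

    ```python
    # Invented illustration of recommendation 1: a message with a type,
    # a sequence number, and data fields, one of which carries a URL.
    # The field names and wire format are assumptions for illustration.
    def encode(msg_type, seq, fields):
        lines = [f"TYPE: {msg_type}", f"SEQ: {seq}"]
        lines += [f"{key.upper()}: {value}" for key, value in fields.items()]
        return "\n".join(lines) + "\n"

    def decode(text):
        msg = {}
        for line in text.strip().splitlines():
            key, _, value = line.partition(": ")
            msg[key] = value
        return msg
    ```

    A simple text framing of this sort keeps the entry cost low for new
    client and server implementations, in line with the "common ground"
    goal.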















2. HISTORY

   In February 1989, a meeting on Internet white pages service was
   initiated by the FRICC (Federal Research Internet Coordinating
   Committee) and the ensuing discussions resulted in RFC 1107 [1] that
   offered some technical conclusions.  Widespread deployment was to
   have taken place by mid-1992.

         RFC 1107: K. Sollins, "Plan for Internet Directory Services",
         [1].

   Several other RFCs have been written suggesting deployment strategies
   and plans for an X.500 Directory Service.

   They are:

         RFC 1275: S. Hardcastle-Kille, "Replication Requirements to
         provide an Internet Directory using X.500", [2].

         RFC 1308: C. Weider, J. Reynolds, "Executive Introduction to
         Directory Services Using the X.500 Protocol", [3].

         RFC 1309: C. Weider, J. Reynolds, S. Heker, "Technical Overview
         of Directory Services Using the X.500 Protocol", [4].

         RFC 1430: S. Hardcastle-Kille, E. Huizer, V. Cerf, R. Hobby &
         S. Kent, "A Strategic Plan for Deploying an Internet X.500
         Directory Service", [5].

   Also, a current working draft submitted by A. Jurg of SURFnet
   entitled, "Introduction to White pages services based on X.500",
   describes why we need a global white pages service and why X.500 is
   the answer [6].

   The North American Directory Forum (NADF) has also done some useful
   work setting conventions for commercial providers of X.500 directory
   service.  Their series of memos is relevant to this discussion.  (See
   RFC 1417 for an overview of this note series [7].)  In particular,
   NADF standing document 5 (SD-5) "An X.500 Naming Scheme for National
   DIT Subtrees and its Application for c=CA and c=US" is of interest
   for its model of naming based on civil naming authorities [8].

   Deployment of an X.500 directory service including that under the PSI
   (Performance Systems International) White Pages Pilot Project and the
   PARADISE Project is significant, and continues to grow, albeit at a
   slower rate than the Internet.







3. QUESTIONS

   Four questions were posed to the discussion list:

      1) What functions should a white pages directory perform?

      2) What approaches will provide us with a white pages directory?

      3) What are the problems to be overcome?

      4) What should the deployment strategy be?

3.A. WHAT FUNCTIONS SHOULD A WHITE PAGES DIRECTORY PERFORM?

   The basic function of a white pages service is to find people and
   information about people.

   In finding people, the service should work fast when searching for
   people by name, even if the information regarding location or
   organization is vague.  In finding information about people, the
   service should retrieve information associated with people, such as a
   phone number, a postal or email address, or even a certificate for
   security applications (authentication, integrity, and privacy).
   Sometimes additional information associated with people is provided
   by a directory service, such as a list of publications, a description
   of current projects, or a current travel itinerary.

   Back in 1989, RFC 1107 detailed 8 requirements of a white pages
   service: (1) functionality, (2) correctness of information, (3) size,
   (4) usage and query rate, (5) response time, (6) partitioned
   authority, (7) access control, (8) multiple transport protocol
   support; and 4 additional features that would make it more useful:
   (1) descriptive naming that could support a yellow pages service, (2)
   accountability, (3) multiple interfaces, and (4) multiple clients.

   Since the writing of RFC 1107, many additional functions have been
   identified.  A White Pages Functionality List is attached as Appendix
   1.  The problem is harder now, the Internet is much bigger, and there
   are many more options available (Whois++, Netfind, LDAP
   (Lightweight Directory Access Protocol), different versions of
   X.500 implementations, etc.).

   A white pages directory should be flexible, should have low resource
   requirements, and should fit into other systems that may be currently
   in use; it should not cost a lot, so that future transitions are not
   too costly; there should be the ability to migrate to something else,
   if a better solution becomes available; there should be a way to
   share local directory information with the Internet in a seamless





   fashion and with little extra effort; the query responses should be
   reliable enough and consistent enough that automated tools could be
   used.

3.B. WHAT APPROACHES WILL PROVIDE US WITH A WHITE PAGES DIRECTORY?

   People have different needs, tastes, etc.  Consequently, a large part
   of the ultimate solution will include bridging among these various
   solutions.  Already we see a Gopher to X.500 gateway, a Whois++ to
   X.500 gateway, and the beginnings of a WWW to X.500 gateway.  Gopher
   can talk to CSO (a phonebook service developed by University of
   Illinois), WAIS (Wide Area Information Server), etc.  WWW can talk to
   everything.  Netfind knows about several other protocols.

   Gopher and WAIS "achieved orbit" simply by providing means for people
   to export and to access useful information; neither system had to
   provide ubiquitous service.  For white pages, if the service doesn't
   provide answers to specific user queries some reasonable proportion
   of the time, users view it as a failure.  One way to achieve a high
   hit rate in an exponentially growing Internet is to use a proactive
   data gathering architecture (e.g., as realized by archie and
   Netfind).  Important as they are, replication, authentication, etc.,
   are irrelevant if no one uses the service.

   There are pluses and minuses to a proactive data gathering method.
   On the plus side, one can build a large database quickly.  On the
   minus side, one can get garbage in the database.  One possibility is
   to use a proactive approach to (a) acquire data for administrative
   review before being added to the database, and/or (b) to check the
   data for consistency with the real world.  Additionally, there is
   some question about the legality of proactive methods in some
   countries.
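   Options (a) and (b) above can be sketched as a small pipeline (the
   record fields and checks below are invented stand-ins): proactively
   gathered records wait in a review queue and must pass both an
   administrative decision and a consistency check before entering the
   database.

   ```python
   # Sketch of options (a) and (b) above, with invented record fields:
   # gathered records are held in a review queue, then must pass both
   # an administrative approval and a consistency check before being
   # added to the database.
   pending, database = [], []

   def gather(record):
       pending.append(record)          # (a) hold for administrative review

   def consistent(record):
       # (b) a stand-in check of the data against the "real world"
       return "@" in record.get("mailbox", "")

   def review(approve):
       while pending:
           record = pending.pop()
           if approve(record) and consistent(record):
               database.append(record)
   ```

   The review step is what keeps garbage out of a quickly built
   database, at the cost of some administrative effort.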

   One solution is to combine existing technology and infrastructure to
   provide a good white pages service, based on an X.500 core plus a set
   of additional index/reference servers.  DNS can be used to "refer"
   to the appropriate zone in the X.500 name space, with WAIS or
   Whois++ used to build up indexes to the X.500 server that will be
   able to process a given request.  These can be index-servers,
   centroids, or something new.
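   The DNS "referral" idea above can be sketched minimally.  The
   records below are invented stand-ins for data a real deployment
   would publish in the DNS; walking up the domain labels finds the
   server responsible for the closest enclosing zone.

   ```python
   # A minimal sketch of using DNS-style records to "refer" a query to
   # the server for the right part of the name space.  The referral
   # records and server names are invented for illustration.
   referrals = {
       "isi.edu": "dsa1.example.net",       # server for this organization
       "edu":     "dsa-root.example.net",   # fallback index server
   }

   def find_server(domain):
       """Walk up the domain labels until a referral record matches."""
       labels = domain.split(".")
       for i in range(len(labels)):
           suffix = ".".join(labels[i:])
           if suffix in referrals:
               return referrals[suffix]
       return None
   ```

   This reuses the already-deployed DNS hierarchy as the "connecting
   fabric" for locating white pages servers, outside of X.500 itself.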

   Some X.500 purists might feel this approach muddles the connecting
   fabric among X.500 servers, since the site index, DNS records, and
   customization gateways are all outside of X.500.  On the other hand,
   making X.500 reachable from a common front-end would provide added
   incentive for sites to install X.500 servers.  Plus, it provides an
   immediate (if interim) solution to the need for a global site index
   in X.500.  Since the goal is to have a good white pages service,





   X.500 purity is not essential.

   It may be that there are parts of the white pages problem that cannot
   be addressed without "complex technology".  A solution that allows
   the user to progress up the ladder of complexity, according to taste,
   perceived need, and available resources may be a much healthier
   approach.  However, experience to date with simpler solutions
   (Whois++, Netfind, archie) indicates that a good percentage of the
   problem of finding information can be addressed with simpler
   approaches.  Users know this and will resist attempts to make them
   pay the full price for the full solution when it is not needed.
   Whereas managers and funders may be concerned with the complexity of
   the technology, users are generally more concerned with the quality
   and ease of use of the service.  A danger in supporting a mix of
   technologies is that service quality may become so variable that
   weak service in some places leads users to see the whole system as
   loose and weak.

   Some organizations will not operate services that they cannot get for
   free or they cannot try cheaply before investing time and money.
   Some people prefer a bare-bones, no support solution that only gives
   them 85 percent of what they want.  Paying for the service would not
   be a problem for many sites, once the value of the service has been
   proven.  Although there is no requirement to provide free software
   for everybody, we do need viable funding and support mechanisms.  A
   solution cannot simply be dictated with any expectation that it will
   stick.

   Finally, are there viable alternative technologies to X.500 now or do
   we need to design something new?  What kind of time frame are we
   talking about for development and deployment?  And will the new
   technology be extensible enough to provide for the as yet unimagined
   uses that will be required of directory services 5 years from now?
   And will this directory service ultimately provide more capabilities
   than just white pages?

3.C. WHAT ARE THE PROBLEMS TO BE OVERCOME?

   There are two classes of problems to be examined; technology issues
   and infrastructure.

   TECHNOLOGY:

   How do we populate the database and make software easily available?

   Many people suggest that a public domain version of X.500 is
   necessary before a widespread X.500 service is operational.  The
   current public domain version is said to be difficult to install and





   to bring into operation, but many organizations have successfully
   installed it and have had their systems up and running for some time.
   Note that the current public domain program, quipu, is not quite
   standard X.500, and is more suited to research than production
   service.  Many people who tried earlier versions of quipu abandoned
   X.500 due to its costly start-up time and inherent complexity.

   The ISODE (ISO Development Environment) Consortium is currently
   developing newer features and is addressing most of the major
   problems.  However, there is the perception that the companies in the
   consortium have yet to turn these improvements into actual products,
   though the consortium says the companies have commercial off-the-
   shelf (COTS) products available now.  The improved products are
   certainly needed now, since if they are too late in being deployed,
   other solutions will be implemented in lieu of X.500.

   The remaining problem with an X.500 White Pages is having a high
   quality public domain DSA.  The ISODE Consortium will make its
   version available for no charge to Universities (or any non-profit or
   government organization whose primary purpose is research) but if
   that leaves a sizeable group using the old quipu implementation, then
   there is a significant problem.  In such a case, an answer may be for
   some funding to upgrade the public version of quipu.

   In addition, the quipu DSA should be simplified so that it is easy to
   use.  Tim Howes' new disk-based quipu DSA solves many of the memory
   problems in DSA resource utilization.  If one fixes the DSA resource
   utilization problem, makes it fairly easy to install, makes it freely
   available, and publishes a popular press book about it, X.500 may
   have a better chance of success.

   The client side of X.500 needs more work.  Many people would rather
   not expend the extra effort to get X.500 up.  X.500 has a steep
   learning curve.  There is a perception that the client side also
   needs a complex Directory User Interface (DUI) built on ISODE.  Yet
   there are alternative DUIs, such as those based on LDAP.  Another
   aspect of the client side is that access to the directory should be
   built into other applications like gopher and email (especially,
   accessing PEM X.509 certificates).
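   As a concrete taste of the LDAP-based alternative, an LDAP string
   search filter of the kind such a DUI would send can be assembled by
   hand.  The helper below is invented for illustration; the attribute
   names (sn, o, objectClass) follow common X.500 usage.

   ```python
   # Invented helper: build an LDAP string search filter of the kind an
   # LDAP-based DUI sends when looking a person up by surname and,
   # optionally, organization.
   def person_filter(surname, org=None):
       parts = [f"(sn={surname})", "(objectClass=person)"]
       if org:
           parts.append(f"(o={org})")
       return "(&" + "".join(parts) + ")"
   ```

   Clients this small are part of LDAP's appeal: they need none of the
   ISODE machinery a full DUI requires.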

   We also need data conversion tools to make the transition between
   different systems possible.  For example, NASA had more than one
   system to convert.

   Searching abilities for X.500 need to be improved.  LDAP is a
   great help, but the following capabilities are still needed:







   -- commercial grade easily maintainable servers with back-end
      database support.

   -- clients that can do exhaustive search and/or cache useful
      information and use heuristics to narrow the search space in case
      of ill-formed queries.

   -- index servers that store index information on a "few" key
      attributes that DUIs can consult in narrowing the search space.
      How about index attributes at various levels in the tree that
      capture the information in the corresponding subtree?
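   The index-server bullet above can be sketched as follows (the server
   names and published values are invented): each server publishes the
   values it holds for a "few" key attributes, in the spirit of Whois++
   centroids, and a DUI consults this index to narrow the search space
   to servers that can possibly answer.

   ```python
   # Invented sketch of the index-server idea: each server publishes
   # the values it holds for a few key attributes, and a DUI consults
   # the index to narrow the search to servers that can possibly
   # answer a query.
   centroids = {
       "dsa-west.example.org": {"surname": {"postel", "anderson"}},
       "dsa-east.example.org": {"surname": {"weider", "reynolds"}},
   }

   def candidate_servers(attribute, value):
       """Return only the servers whose index contains the value."""
       return sorted(server for server, held in centroids.items()
                     if value.lower() in held.get(attribute, set()))
   ```

   The same structure placed at various levels of the tree would
   capture, at each level, the information held in the corresponding
   subtree.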

   Work still needs to be done with Whois++ to see if it will scale to
   the level of X.500.

   An extended Netfind is attractive because it would work without any
   additional infrastructure changes (naming, common schema, etc.), or
   even the addition of any new protocols.

   INFRASTRUCTURE:

   The key issues are central management and naming rules.

   X.500 is not run as a service in the U.S., and therefore those using
   X.500 in the U.S. are not assured of the reliability of root servers.
   X.500 cannot be taken seriously until there is some central
   management and coordinated administration support in place.  Someone
   has to be responsible for maintaining the root; this effort is
   comparable to maintaining the root of the DNS.  PSI provided this
   service until the end of the FOX project [9]; should they receive
   funding to continue this?  Should this be a commercial enterprise?
   Or should this function be added to the duties of the InterNIC?

   New sites need assistance in getting their servers up and linked to a
   central server.

   There are two dimensions along which to consider the infrastructure:
   1) general purpose vs. specific, and 2) tight vs. loose information
   framework.

   General purpose leads to more complex protocols - the generality is
   an overhead, but gives the potential to provide a framework for a
   wide variety of services.  Special purpose protocols are simpler, but
   may lead to duplication or restricted scope.

   A tight information framework costs effort to coerce existing data
   and to build structures.  Once in place, it gives better
   manageability and
   more uniform access.  The tight information framework can be





   subdivided further into: 1) the naming approach, and 2) the object
   and attribute extensibility.

   Examples of systems placed in this space are: a) X.500 is a general
   purpose and tight information framework, b) DNS is a specific and
   tight information framework, c) there are various research efforts in
   the general purpose and loose information framework, and d) Whois++
   employs a specific and loose information framework.

   We need to look at the parts of this spectrum in which we need to
   provide services.  This may lead to the conclusion that several
   services are
   desirable.

3.D. WHAT SHOULD THE DEPLOYMENT STRATEGY BE?

   No solution will arise simply by providing technical specifications.
   The solution must fit the way the Internet adopts information
   technology.  The information systems that have gained real momentum
   in the Internet (WAIS, Gopher, etc.) followed the model:

   -- A small group goes off and builds a piece of software that
      supplies badly needed functionality at feasible effort to
      providers and users.

   -- The community rapidly adopts the system as a de facto standard.

   -- Many people join the developers in improving the system and
      standardizing the protocols.

   What can this report do to help make this happen for Internet white
   pages?

   Deployment Issues.

   -- A strict hierarchical layout is not suitable for all directory
      applications and hence we should not force fit it.

   -- A typical organization's hierarchical information itself is often
      proprietary; they may not want to divulge it to the outside world.

      Institutions (not just commercial ones) will always have some
      information that they do not wish to display to the public in any
      directory.  This is especially true for institutions that want to
      protect themselves from headhunters and sales personnel.

   -- There is the problem of multiple directory service providers, but
      see NADF work on "Naming Links" and their "CAN/KAN" technology
      [7].

      A more general approach such as using a knowledge server (or a set
      of servers) might be better.  The knowledge servers would have to
      know about which server to contact for a given query and thus may
      refer to either service provider servers or directly to
      institution-operated servers.  The key problem is how to collect
      the knowledge and keep it up to date.  There are some questions
      about the viability of "naming links" without a protocol
      modification.

   -- Guidelines are needed for methods of searching and using directory
      information.

   -- A registration authority is needed to register names at various
      levels of the hierarchy to ensure uniqueness or adoption of the
      civil naming structure as delineated by the NADF.

   It is true that deployment of X.500 has not seen exponential growth
   as have other popular services on the Internet.  But rather than
   abandoning X.500 now, these efforts, which are attempting to address
   some of the causes, should continue to move forward.  Certainly
   installation complexity and performance problems with the quipu
   implementation need solutions.  These problems are being worked on.

   One concern with the X.500 service has been the lack of ubiquitous
   user agents.  Very few hosts run the ISODE package.  The use of LDAP
   improves this situation.  The X.500-gopher gateway has had the
   greatest impact on providing widespread access to the X.500 service.
   Since adding X.500 as a service on the ESnet Gopher, the use of the
   ESnet DSA has risen dramatically.

   Another serious problem affecting the deployment of X.500, at least
   in the U.S., is the minimal support given to building and maintaining
   the necessary infrastructure since the demise of the Fox Project [9].
   Without funding for this effort, X.500 may not stand a chance in the
   United States.

4. REVIEW OF TECHNOLOGIES

   There are now many systems for finding information; some of these are
   oriented to white pages, some include white pages, and others
   currently ignore white pages.  In any case, it makes sense to review
   these systems to see how they might fit into the provision of an
   Internet white pages service.

4.A. X.500

   Several arguments in X.500's favor are its flexibility, distributed
   architecture, security, superiority to paper directories, and that it
   can be used by applications as well as by humans.  X.500 is designed
   to provide a uniform database facility with replication,
   modification, and authorization.  Because it is distributed, it is
   particularly suited for a large global White Pages directory.  In
   principle, it has good searching capabilities, allowing searches at
   any level or in any subtree of the DIT (Directory Information Tree).
   There are DUIs available for all types of workstations and X.500 is
   an international standard.  In theory, X.500 can provide vastly
   better directory service than other systems; in practice, however,
   X.500 is difficult, overly complicated, and inconvenient to use.  It
   should provide a better service.  X.500 is a technology that may be
   used to provide a white pages service, although some features of
   X.500 may not be needed to provide just a white pages service.
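   The subtree-search property described above can be sketched in a few
   lines of Python.  This is only an illustration of the DIT idea, not
   of real X.500 encodings or DAP; the tree, names, and addresses below
   are invented.

```python
# Toy model of a Directory Information Tree (DIT): every node may carry
# an entry ("attrs") and children keyed by relative distinguished name.
# All names and addresses below are invented examples.

def search(dit, base, predicate):
    """Walk the subtree rooted at `base`; return matching (dn, entry) pairs."""
    node = dit
    for rdn in base:                       # descend to the search base
        node = node["children"][rdn]

    def walk(node, dn):
        entry = node.get("attrs")
        if entry is not None and predicate(entry):
            yield dn, entry
        for rdn, child in node.get("children", {}).items():
            yield from walk(child, dn + (rdn,))

    return list(walk(node, tuple(base)))

dit = {"children": {"c=US": {"children": {"o=Example University": {
    "children": {
        "cn=Jane Doe": {"attrs": {"cn": "Jane Doe",
                                  "mail": "jdoe@example.edu"}},
        "cn=John Roe": {"attrs": {"cn": "John Roe",
                                  "mail": "jroe@example.edu"}},
    }}}}}}

# Search one organizational subtree for a surname.
hits = search(dit, ["c=US", "o=Example University"],
              lambda e: "Doe" in e["cn"])
```

   The same call could be rooted one level higher (at "c=US") to widen
   the scope, which is the property the report credits to X.500's
   searches at any level of the DIT.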

   There are three reasons X.500 deployment has been slow, and these are
   largely the same reasons people don't like it:

   1) The available X.500 implementations (mostly quipu based on the
      ISODE) are very large and complicated software packages that are
      hard to work with.  This is partly because they solve the general
      X.500 problem, rather than the subset needed to provide an
      Internet white pages directory.  In practice, this means that a
      portion of the code/complexity is effectively unused.

      The LDAP work has virtually eliminated this concern on the client
      side of things, as LDAP is both simple and lightweight.  Yet, the
      complexity problem still exists on the server side of things, so
      people continue to have trouble bringing up data for simple
      clients to access.

      It has been suggested that the complexity in X.500 is due to the
      protocol stack and the ISODE base.  If this is true, then LDAP may
      be simple because it uses TCP directly without the ISODE base.  A
      version of X.500 server that took the same approach might also be
      "simple" or at least simpler.  Furthermore, the difficulty in
      getting an X.500 server up may be related to finding the data to



Postel & Anderson                                              [Page 12]
^L
RFC 1588                   White Pages Report              February 1994


      put in the server, and so may be a general data management problem
      rather than an X.500 specific problem.

      There is some evidence that eventually a large percentage of the
      use of directory services may be from applications rather than
      direct user queries.  For example, mail-user-agents exist that are
      X.500 capable with an integrated DUA (Directory User Agent).

   2) You have to "know a lot" to get a directory service up and running
      with X.500.  You have to know about object classes and attributes
      to get your data into X.500.  You have to get a distinguished name
      for your organization and come up with an internal tree structure.
      You have to contact someone before you can "come online" in the
      pilot.  It's not like gopher where you type "make", tell a few
      friends, and you're up and running.

      Note that a gopher server is not a white pages service, and as
      noted elsewhere in this report, there are a number of issues that
      apply to white pages service that are not addressed by gopher.

      Some of these problems could be alleviated by putting in place
      better procedures.  It should not be any harder to get connected
      to X.500 than it is to get connected to the DNS, for example.
      However, there is a certain amount of complexity that may be
      inherent in directory services.  Just compare Whois++ and X.500.
      X.500 has object classes.  Whois++ has templates.  X.500 has
      attributes.  Whois++ has fields.  X.500 has distinguished names.
      Whois++ has handles.

   3) Getting data to populate the directory, converting it into the
      proper form, and keeping it up-to-date turns out to be a hard
      problem.  Often this means talking to the administrative computing
      department at your organization.

      This problem exists regardless of the protocol used.  It should be
      easy to access this data through the protocol you're using, but
      that says more about implementations than it does about the
      protocol.  Of course, if the only X.500 implementation you have
      makes it really hard to do, and the Whois++ implementation you
      have makes it easy, it's hard for that not to reflect on the
      protocols.

   The fact that there are sites like University of Michigan, University
   of Minnesota, Rutgers University, NASA, LBL, etc. running X.500 in
   serious production mode shows that the problem has more to do with
   the current state of X.500 software and procedures than with the
   protocol itself.  It takes a lot of effort to get it going; the
   effort required to keep it going is relatively small.

   The yellow pages problem is not really a problem.  If you look at it
   in the traditional phonebook-style yellow pages way, then X.500 can
   do the job just like the phone book does.  Just organize the
   directory based on different (i.e., non-geographical) criteria.  If
   you want to "search everything", then you need to prune the search
   space.  To do this you can use the Whois++ centroids idea, or
   something similar.  But this idea is as applicable to X.500 as it is
   to Whois++.  Maybe X.500 can use the centroids idea most effectively.

   Additionally, it should be noted that there is not one single Yellow
   Pages service; depending on the type of query, there could be
   several: querying by role, by location, or by email address.

   No one is failing to run X.500 because they perceive it fails to
   solve the yellow pages problem.  The reasons are more likely one or
   more of the three above.

   X.500's extra complexity is paying off for University of Michigan.
   University of Michigan started with just people information in their
   tree.  Once that infrastructure was in place, it was easy for them to
   add more things to handle mailing lists/email groups, yellow pages
   applications like a documentation index, directory of images, etc.

   The ESnet community is using X.500 right now to provide a White Pages
   service; users succeed everyday in searching for information about
   colleagues given only a name and an organizational affiliation; and
   yes, they do load data into X.500 from an Oracle database.

   LBL finds X.500 very useful.  They can look up DNS information, find
   what Zone a Macintosh is in, look up departmental information, view
   the current weather satellite image, and look up people information.

   LDAP should remove many of the complaints about X.500.  LDAP clients
   are very easy to implement and provide all the functionality needed.
   Perhaps DAP should be scrapped.

   Another approach is the interfacing of X.500 servers to WWW (the
   interface is sometimes called XWI).  Using the mosaic program from
   the NCSA, one can access X.500 data.

   INTERNET X.500

   The ISO/ITU may not make progress on improving X.500 in the time
   frame required for an Internet white pages service.  One approach is
   to have the Internet community (e.g., the IETF) take responsibility
   for developing a subset or profile of that part of X.500 it will use,
   and developing solutions for the ambiguous and undefined parts of
   X.500 that are necessary to provide a complete service.

   Tasks in this approach might include:

   1. Internet (IETF) control of the core white pages service
      infrastructure and standard.

   2. Base the standard on the 1993 specification, especially
      replication and access control.

   3. For early deployment choose which parts of the replication
      protocol are really urgently needed.  It may be possible to define
      a subset and to make it mandatory for the Internet.

   4. Define an easy and stable API (Application Program Interface) for
      key access protocols (DAP, LDAP).

   5. Use a standard knowledge model.

   6. Make sure that high-performance implementations will exist for
      the most important server roles, principally the upper layers of
      the DSA tree.

   7. Make sure that servers will exist that will be able to
      efficiently get the objects (or better, the attributes) from
      existing traditional databases for use at the leaves of the DSA
      tree.

4.B. WHOIS++

   The very first discussions of this protocol started in July 1992.
   In less than 15 months there were three working public domain
   implementations, with at least three more on the way, as well as a
   Whois++ front-end to X.500.  In addition, the developers who are
   working on the resource location system infrastructure (URL/URI)
   have committed to implementing it on top of Whois++ because of its
   superior search capabilities.

   Some of the main problems with getting a White Pages directory going
   have been: (1) search, (2) lack of public domain versions, (3)
   implementations are too large, (4) high start up cost, and (5) the
   implementations don't make a lot of sense for a local directory,
   particularly for small organizations.  Whois++ can and does address
   all these problems very nicely.

   Search is built into Whois++, and there is a strong commitment from
   the developers to keep this a high priority.

   The protocols are simple enough that someone can write a server in 3
   days.  And people have done it.  If the protocols stay simple, it
   will always be easy for someone to whip out a new public domain
   server.  In this respect, Whois++ is much like WAIS or Gopher.

   The typical Whois++ implementation is about 10 megabytes, including
   the WAIS source code that provides the data engine.  Even assuming a
   rough doubling of the code as additional necessary functionality is
   built in, that's still quite reasonable, and compares favorably with
   the available implementations of X.500.  In addition, WAIS is disk-
   based from the start, and is optimized for local searching.  Thus,
   it requires only disk storage for the data and the indexes.  In a
   recent test, Chris Weider used a 5 megabyte source data file with the
   Whois++ code.  The indices came to about another 7 megabytes, and the
   code was under 10 megabytes.  The total is 22 megabytes for a Whois++
   server.

   The available Whois++ implementations take about 25 minutes to
   compile on a Sun SPARCstation IPC.  Indexing a 5 megabyte data file
   takes about another 20 minutes on an IPC.  Installation is very easy.
   In addition, since the Whois++ server protocol is designed to be only
   a front-end, organizations can keep their data in any form they want.

   Whois++ makes sense as a local directory service.  The
   implementations are small, install quickly, and the raw query
   language is very simple.  The simplicity of the interaction between
   the client and the server makes it easy to experiment with and to
   write clients for, something that wasn't true of X.500 until LDAP.
   In addition, Whois++ can be run strictly as a local service, with
   integration into the global infrastructure done at any time.
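   As a rough illustration of how simple such a raw query interaction
   can be, the fragment below sketches a line-oriented handler in the
   spirit of Whois++ templates and fields.  The record data and the
   exact query syntax are invented simplifications, not the actual
   Whois++ grammar.

```python
# Toy handler in the spirit of Whois++: the client sends a line of
# "field=value;field=value" terms and the server returns matching
# template records.  Data and syntax here are invented examples.

RECORDS = [  # template records: a type plus named fields
    {"template": "USER", "name": "Jane Doe", "email": "jdoe@example.edu"},
    {"template": "USER", "name": "John Roe", "email": "jroe@example.edu"},
]

def parse_query(line):
    """Split 'name=doe;template=user' into field/value constraints."""
    terms = [t.split("=", 1) for t in line.strip().split(";") if t]
    return {field.strip().lower(): value.strip() for field, value in terms}

def handle(line):
    """Return every record whose fields contain all the query values."""
    query = parse_query(line)
    return [r for r in RECORDS
            if all(v.lower() in r.get(f, "").lower()
                   for f, v in query.items())]

matches = handle("name=doe;template=user")
```

   A server this small is consistent with the report's claim that the
   protocol's simplicity makes new implementations cheap to write.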

   It is true that Whois++ is not yet a fully functional White Pages
   service.  It requires a lot of work before it will be so.  However,
   X.500 is not that much closer to the goal than Whois++ is.

   Work needs to be done on replication and authentication of data.  The
   current Whois++ system does not lend itself to delegation.  Research
   is still needed to improve the system and see if it scales well.

4.C. NETFIND

   Right now, the white pages service with the most coverage in the
   Internet is Mike Schwartz' Netfind.  Netfind works in two stages: 1)
   find out where to ask, and 2) start asking.

   The first stage is based on a database of netnews articles, UUCP
   maps, NIC WHOIS databases, and DNS traversals, which then maps
   organizations and localities to domain names.  The second stage
   consists of finger queries, Whois queries, smtp expns and vrfys, and
   DNS lookups.

   The key feature of Netfind is that it is proactive.  It doesn't
   require that the system administrator bring up a new server, populate
   it with all kinds of information, keep the information in sync, worry
   about update, etc.  It just works.

   A suggestion was made that Netfind could be used as a way to populate
   the X.500 directory.  A tool might do a series of Netfind queries,
   making the corresponding X.500 entries as it progresses.
   Essentially, X.500 entries would be "discovered" as people look for
   them using Netfind.  Others do not believe this is feasible.

   Another perhaps less interesting merger of Netfind and X.500 is to
   have Netfind add X.500 as one of the places it looks to find
   organizations (and people).

   A search can lead you to where a person has an account (e.g.,
   law.xxx.edu) only to find a problem with the DNS services for that
   domain, or that the finger service is unavailable, or that the
   machines are not running Unix (there are lots of VMS machines and
   IBM mainframes still out there).  In addition, there are security
   gateways.
   trends in computing are towards the use of powerful portables and
   mobile computing and hence Netfind's approach may not work.  However,
   Netfind proves to be an excellent yellow-pages service for domain
   information in DNS servers - given a set of keywords it lists a set
   of possible domain names.

   Suppose we store a pointer in the DNS to a white-pages server for a
   domain.  We can use Netfind to come up with a list of servers to
   search, query those servers, and then combine the responses.
   However, we need a formal method of gathering white-pages data;
   informal methods will not work and may even run into legal problems.

   The user search phase of Netfind is a short-term solution to
   providing an Internet white pages.  For the longer term, the
   applicability of the site discovery part of Netfind is more relevant,
   and more work has been put into that part of the system over the past
   2 years than into the user search phase.

   Given Netfind's "installed customer base" (25k queries per day, users
   in 4875 domains in 54 countries), one approach that might make sense
   is to use Netfind as a migration path to a better directory, and
   gradually phase Netfind's user search scheme out of existence.  The
   idea of putting a record in the DNS to point to the directory service
   to search at a site is a good start.

   One idea for further development is to have the DNS record point to a
   "customization" server that a site can install to tailor the way
   Netfind (or whatever replaces Netfind) searches their site.  This
   would provide sites a choice of degrees of effort and levels of
   service.  The least common denominator is what Netfind presently
   does: DNS/SMTP/finger.  A site could upgrade by installing a
   customization server that points to the best hosts to finger, or that
   says "we don't want Netfind to search here" (if people are
   sufficiently concerned about the legal/privacy issues, the default
   could be changed so that searches must be explicitly enabled).  The
   next step up is to use the customization server as a gateway to a
   local Whois, CSO, X.500, or home grown white pages server.  In the
   long run, if X.500 (or Whois++, etc.) really catches on, it could
   subsume the site indexing part of Netfind and use the above approach
   as an evolution path to full X.500 deployment.  However, other
   approaches may be more productive.  One key to Netfind's success has
   been not relying on organizations to do anything to support Netfind;
   the customization server, however, breaks this model.

   Netfind is very useful.  Users don't have to do anything to wherever
   they store their people data to have it "included" in Netfind.  But
   just like archie, it would be more useful if there were a more common
   structure to the information it gives you, and therefore to the
   information contained in the databases it accesses.  It's this common
   structure that we should be encouraging people to move toward.

   As a result of suggestions made at the November meeting, Netfind has
   been extended to make use of URL information stored in DNS records.
   Based on this mechanism, Netfind can now interoperate with X.500,
   WHOIS, and PH, and can also allow sites to tune which hosts Netfind
   uses for SMTP or Finger, or restrict Netfind from searching their
   site entirely.
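   The DNS-pointer mechanism can be sketched as a small dispatch step.
   The domains, URLs, and scheme names below are hypothetical stand-ins
   for whatever URL record data a site actually publishes.

```python
# Sketch of the DNS-pointer idea: each domain publishes a URL naming
# its preferred white-pages service, and a Netfind-like client picks a
# search strategy from it.  All record contents here are invented.

DNS_URL_RECORDS = {
    "example.edu": "ldap://ldap.example.edu/o=Example%20University",
    "example.com": "whois://whois.example.com",
    "example.org": None,   # no record published: fall back to probing
}

def plan_search(domain):
    """Decide how to search a site from its (possible) DNS URL record."""
    url = DNS_URL_RECORDS.get(domain)
    if url is None:
        return ("fallback", "DNS/SMTP/finger")  # least common denominator
    scheme = url.split(":", 1)[0]
    return (scheme, url)

plans = [plan_search(d) for d in ("example.edu", "example.org")]
```

   A site that publishes no record gets today's default behavior, while
   one that does can steer the search to its own directory service.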






4.D. ARCHIE

   Archie is a success because it is a directory of files that are
   accessible over the network.  Every FTP site makes a "conscious"
   decision to make the files available for anonymous FTP over the
   network.  The mechanism that archie uses to gather the data is the
   same as that used to transfer the files.  Thus, the success rate is
   near 100%.  In a similar vein, if Internet sites make a "conscious"
   decision to make white-pages data available over the network, it is
   possible either to link these servers into a world-wide directory
   (as in X.500), or to build an index that helps to isolate the
   servers to be searched (as in Whois++).  Users don't have to do
   anything to their FTP archives to have them included in archie.  But
   everybody recognizes that archie could be more useful if there were
   some more common structure to the information it returns, and to the
   information contained in the archives.  Archie came after anonymous
   FTP sites were in widespread use.  Unfortunately for white pages, we
   are building the tools, but there is no data yet.

4.E. FINGER

   The Finger program, which retrieves either information about an
   individual with an account or a list of currently logged-in users
   from a host running the server, can be used to check a suggestion
   that a particular individual has an account on a particular host.
   It does not provide an efficient method of searching for an
   individual.
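   A minimal finger client makes the limitation concrete: per RFC 1288
   the query is just a user name plus CRLF on TCP port 79, and the
   reply is free-form text, so finger can confirm a guess about one
   account but cannot support structured searching.  The host and user
   in the final comment are hypothetical.

```python
# Minimal finger client sketch (RFC 1288): send "user\r\n" to port 79
# and read the server's free-form text reply.

import socket

def finger_request(user):
    """Build the on-the-wire finger query for one user."""
    return user.encode("ascii") + b"\r\n"

def finger(host, user, timeout=10):
    """Ask `host` about `user`; return the server's free-form reply."""
    with socket.create_connection((host, 79), timeout=timeout) as sock:
        sock.sendall(finger_request(user))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode("latin-1")

# e.g. finger("host.example.edu", "jdoe") -- needs a live finger server.
```

   Because the reply has no agreed structure, a caller can only show it
   to a human, which is exactly the inefficiency noted above.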

4.F. GOPHER

   A "gateway" between Gopher and X.500 has been created so that one can
   examine X.500 data from a Gopher client.  Similar "gateways" are
   needed for other white pages systems.

4.G. WWW

   One extension to WWW would be an X.500 attribute type for the WWW
   URI/URL, with the possibility for any client to request from the
   X.500 server either (1) the locator itself (letting the client
   decide whether or not to access the actual data), or (2), for
   clients not capable of accessing this data, the data itself (packed)
   in the ASN.1-encoded result.

   This would give access to potentially any piece of information
   available on the network through X.500, and in the white pages case
   to photos or voice messages for persons.

   This solution is preferable to one consisting of storing this
   multimedia information directly in the directory, because it allows
   WWW capable DUIs to access directly any piece of data no matter how
   large.  This work on URIs is not WWW-specific.

5. ISSUES

5.A. DATA PROTECTION

   Outside of the U.S., nearly all developed countries have rather
   strict data protection acts (mostly to ensure privacy) that govern
   any database of personal data.

   It is mandatory for the people in charge of such white pages
   databases to have full control over the information that can be
   stored and retrieved in such a database, and to provide access
   controls over the information that is made available.

   If modification is allowed, then authentication is required.  The
   database manager must be able to prevent users from making
   disallowed information available.

   When we are dealing with personal records, the issues are somewhat
   more involved than when exporting files.  We cannot allow trawling
   of the data, and we need access controls so that several
   applications can use the directory; hence we need authentication.

   X.500 might have developed faster if security issues were not part of
   the implementation.  There is tension between quick lightweight
   implementations and the attempt to operate in a larger environment
   with business issues incorporated.  The initial belief was that data
   is owned by the people who put the data into the system; however,
   most data protection laws hold the organizations keeping the data
   responsible for the quality of the data about their individuals.
   Experience also shows that the people most affected by inaccurate
   data are the people who are trying to access the data.  These
   problems apply to all technologies.

5.B. STANDARDS

   Several types of standards are needed: (1) standards for
   interoperation between different white pages systems (e.g., X.500
   and Whois++), (2) standards for naming conventions, and (3)
   standards within the structured data of each system (what fields or
   attributes are required and optional, and what are their data
   types).

   The standards for interoperation may be developed from the work now
   in progress on URLs, with some additional protocol developed to
   govern the types of messages and message sequences.

   Both the naming of the systems and the naming of individuals would
   benefit from consistent naming conventions.  The use of the NADF
   naming scheme should be considered.

   When structured data is exchanged, standards are needed for the data
   types and the structural organization.  In X.500, much effort has
   gone into the definition of various structures or schemas, and yet
   few standard schemas have emerged.

   There is a general consensus that a "cookbook" for administrators
   would make X.500 implementation easier and more attractive.  Such
   guides are essential for getting X.500 into wider use.  It is also
   essential that other technologies such as Whois++, Netfind, and
   archie have complete user guides available.

5.C. SEARCHING AND RETRIEVING

   The main complaint, especially from those who enjoyed using a
   centralized database (such as the InterNIC Whois service), is the
   need to search for all the John Does in the world.  Given that the
   directory needs to be distributed, there is no way of answering this
   question without incurring additional cost.

   This is a problem with any distributed directory - you just can't
   search every leaf in the tree in any reasonable amount of time.  You
   need to provide some mechanism to limit the number of servers that
   need to be contacted.  The traditional way to handle this is with
   hierarchy.  This requires the searcher to have some idea of the
   structure of the directory.  It also comes up against one of the
   standard problems with hierarchical databases - if you need to search
   based on a characteristic that is NOT part of the hierarchy, you are
   back to searching every node in the tree, or you can search an index
   (see below).

   In general:

   -- the larger the directory the more need for a distributed solution
      (for upkeep and manageability).

   -- once you are distributed, the search space for any given search
      MUST be limited.

   -- this makes it necessary to provide more information as part of the
      query (and thus makes the directory harder to use).

   Any directory system can be used in a manner that makes searching
   less than easy.  With a User Friendly Name (UFN) query, a user can
   usually find an entry (presuming it exists) without a lot of trouble.
   Using additional listings (as per NADF SD-5) helps to hide geographic
   or civil naming infrastructure knowledge requirements.

   Search power is a function of DSA design in X.500, not a function of
   Distinguished Naming.  Search can be aided by addition in X.500 of
   non-distinguishing attributes, and by using the NADF Naming Scheme
   it is possible to lodge an entry anywhere in the DIT where you
   believe it will be looked for.

   One approach to the distributed search problem is to create another
   less distributed database to search, such as an index.  This is done
   by doing a (non-interactive) pre-search, and collecting the results
   in an index.  When a user wants to do a real time search, one first
   searches the index to find pointers to the appropriate data records
   in the distributed database.  One example of this is the building of
   centroids that contain index information.  There may be a class of
   servers that hold indices, called "index-servers".

5.D. INDEXING

   The suggestion for how to do fast searching is indexing: that is, to
   pre-compute an index of people from across the distributed database
   and hold that index in an index server.  When a user wants
   to search for someone, he first contacts the index-server.  The
   index-server searches its index data and returns a pointer (or a few
   pointers) to specific databases that hold data on people that match
   the search criteria.  Other systems which do something comparable to
   this are archie (for FTP file archives), WAIS, and Netfind.
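   The pre-search and lookup steps above can be sketched as follows.
   The server names and people are invented, and a real centroid would
   carry more than bare name tokens; this is only the shape of the
   idea.

```python
# Sketch of the index-server idea: each leaf server contributes a
# "centroid" (here reduced to a set of name tokens), and a client
# consults the index first to prune the search space.

from collections import defaultdict

LEAF_SERVERS = {   # leaf server -> the people it actually holds
    "wp.example.edu": ["Jane Doe", "Ann Smith"],
    "wp.example.com": ["John Doe"],
    "wp.example.org": ["Eva Jones"],
}

def build_index(leaves):
    """The non-interactive pre-search: token -> servers holding it."""
    index = defaultdict(set)
    for server, names in leaves.items():
        for name in names:
            for token in name.lower().split():
                index[token].add(server)
    return index

def servers_to_ask(index, token):
    """Return only the leaf servers that could hold a match."""
    return sorted(index.get(token.lower(), ()))

index = build_index(LEAF_SERVERS)
candidates = servers_to_ask(index, "Doe")   # prune before real queries
```

   The real-time search then contacts only the candidate servers
   instead of every leaf in the tree.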

5.E. COLLECTION AND MAINTENANCE

   The information must be "live" - that is, it must be used.  Often one
   way to ensure this is to use the data (perhaps locally) for something
   other than white pages.  If it isn't, most people won't bother to
   keep the information up to date.  The white pages in the phone book
   have the advantage that the local phone company is in contact with
   the listee monthly (through the billing system), and if the address
   is not up to date, bills don't get delivered, and there is feedback
   that the address is wrong.  There is even better contact for the
   phone number, since the local phone company must know that for their
   basic service to work properly.  It is this aspect of directory
   functionality that leads towards a distributed directory system for
   the Internet.

   One approach is to use existing databases to supply the white pages
   data.  It would then be helpful to define a particular use of SQL
   (Structured Query Language) as a standard interface language between
   the databases and the X.500 DSA or other white pages server.  One
   then needs either to have the directory service access the existing
   database using an interface language it already knows (e.g., SQL), or
   to have tools that periodically update the directory database from
   the existing database.  Some sort of "standard" query format (and
   protocol) for directory queries, with "standard" field names, will be
   needed to make this work in general.  In a way, both X.500 and
   Whois++ provide this.  This approach implies customization at every
   existing database to interface to the "standard" query format.
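
   The second option - periodically regenerating directory entries
   from an existing database - can be sketched as below (present-day
   Python; the personnel table and its columns are invented, and the
   "standard" field names shown are X.500-style attribute names used
   only as an example of a field-name mapping):

```python
import sqlite3

# Hypothetical mapping from local column names to "standard"
# white pages field names (X.500-style, for illustration).
FIELD_MAP = {"full_name": "cn",
             "phone": "telephoneNumber",
             "email": "mail"}

def export_white_pages(conn):
    """Pull rows from the local personnel table and re-express
    them with the field names a directory server expects."""
    rows = conn.execute(
        "SELECT full_name, phone, email FROM personnel")
    entries = []
    for full_name, phone, email in rows:
        entries.append({FIELD_MAP["full_name"]: full_name,
                        FIELD_MAP["phone"]: phone,
                        FIELD_MAP["email"]: email})
    return entries

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE personnel (full_name TEXT, phone TEXT, email TEXT)")
conn.execute(
    "INSERT INTO personnel VALUES "
    "('Jon Postel', '310-822-1511', 'Postel@ISI.EDU')")
entries = export_white_pages(conn)
```

   A tool like this would run on a schedule, so the directory tracks
   the database that the organization already maintains for its own
   purposes.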

   Some strongly believe that the white pages service needs to be
   created from the bottom up with each organization supplying and
   maintaining its own information, and that such information has to be
   the same -- or a portion of the same -- information the organization
   uses locally.  Otherwise the global information will be stale and
   incomplete.

   One way to make this work is to distribute software that:

      - is useful locally,

      - fits into the global scheme,

      - is available free, and

      - works on most Unix systems.

   With respect to privacy, it would be good for the local software to
   have controls that make it possible to put company-sensitive
   information into the locally maintained directory and have only a
   portion of it exported for outsiders.
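
   Such an export control might be as simple as an attribute
   allow-list applied when local entries are published.  A sketch
   (the attribute names are X.500-style, but the filter itself is
   hypothetical):

```python
# Only attributes on the allow-list leave the organization;
# anything else (salary, internal projects, ...) stays local.
EXPORTABLE = {"cn", "mail", "telephoneNumber"}

def export_entry(local_entry):
    """Return the externally visible portion of a local entry."""
    return {k: v for k, v in local_entry.items() if k in EXPORTABLE}

local = {"cn": "Jon Postel",
         "mail": "Postel@ISI.EDU",
         "salary": "confidential",
         "project": "internal"}
public = export_entry(local)
```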

5.F. NAMING STRUCTURE

   We need a clear naming scheme capable of associating a name with
   attributes without ambiguity, one that is stable over time but also
   capable of coping with change.  This scheme should have a clear idea
   of naming authorities and be able to store information required by
   authentication mechanisms (e.g., PEM or X.509 certificates).

   The NADF is working to establish a National Public Directory Service,
   based on the use of existing Civil Naming Authorities to register
   entry owners' names, and to deal with the shared-entry problem with a
   shared public DIT supported by competing commercial service
   providers.  At this point, we have no sense of how [un]successful the
   NADF may be in accomplishing this.

   The NADF eventually concluded that the directory should be organized
   so entries can be found where people (or other entities) will look
   for them, not where civil naming authorities would place their
   archival name registration records.

   There are some incompatibilities between use of the NADF Naming
   Scheme, the White Pages Pilot Naming Scheme, and the PARADISE Naming
   Scheme.  These incompatibilities should be resolved.

5.G. CLAYMAN PROPOSAL

   RFC 1107 offered a "strawman" proposal for an Internet Directory
   Service.  The next step after strawman is sometimes called "clayman",
   and here a clayman proposal is presented.

   We assume only white pages service is to be provided, and we let
   sites run whatever access technologies they want to (with whatever
   access controls they feel comfortable).

   The architecture can then be that the discovery process leads to a
   set of URLs.  A URL is like an address, but a typed one: it carries
   an identifier and an access method, rather than being a protocol
   itself.  The client sorts the URLs and may discard some that it
   cannot deal with.  The client then talks to the "meaningful URLs"
   (such as Whois, Finger, X.500).
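
   The client side of this architecture can be sketched as follows:
   given the URLs returned by discovery, the client keeps only those
   whose access method it understands, in its order of preference
   (present-day Python; the URLs and preference list are invented):

```python
from urllib.parse import urlparse

# Access methods this client can speak, in order of preference.
PREFERRED = ["x500", "whois", "finger"]

def usable_urls(urls):
    """Discard URLs with unknown access methods and sort the
    rest by the client's preference."""
    known = [u for u in urls if urlparse(u).scheme in PREFERRED]
    return sorted(known,
                  key=lambda u: PREFERRED.index(urlparse(u).scheme))

discovered = [
    "finger://host.isi.edu/postel",
    "gopher://old.example.org/1",   # scheme this client cannot handle
    "whois://wp.isi.edu/?name=postel",
]
targets = usable_urls(discovered)
```

   The URL's access method, not a single mandated protocol, is what
   lets servers with different technologies coexist behind one
   discovery step.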

   This approach results in a low entry cost for servers that want to
   make information available, a Darwinian selection of access
   technologies, and coalescence in the Internet marketplace; the white
   pages service will tend toward homogeneity and ubiquity.

   Some issues for further study are what discovery technology to use
   (Netfind together with Whois++ including centroids?), how to handle
   non-standard URLs (one possible solution is to put a server in front
   of these non-standard URLs which reevaluates the pointer and acts as
   a front-end to a database), which data model to use (Finger or
   X.500), and how to utilize a common discovery technology (e.g.,
   centroids) in a multiprotocol communication architecture.
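
   A centroid, in the Whois++ sense, is essentially the set of
   distinct values a server holds; an index server collects these
   summaries and forwards a query only to servers whose centroid
   contains the query term.  A minimal sketch (present-day Python,
   invented server names):

```python
def centroid(entries):
    """The set of distinct tokens a server holds -- a compact
    summary an index server can collect from each database."""
    tokens = set()
    for entry in entries:
        tokens.update(entry.lower().split())
    return tokens

def forward_targets(centroids, term):
    """Forward a query only to servers whose centroid holds the term."""
    return sorted(s for s, c in centroids.items()
                  if term.lower() in c)

centroids = {
    "wp.isi.edu":   centroid(["Jon Postel", "Celeste Anderson"]),
    "wp.merit.edu": centroid(["Chris Weider"]),
}
```

   The point of the summary is that it is far smaller than the data
   itself, so it can be gathered centrally without copying the
   underlying databases.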

   The rationale for this meta-WPS approach is that it builds on current
   practices, while striving to provide a ubiquitous directory service.
   Since there are various efforts going on to develop WPS based on
   various different protocols, one can envisage a future with a meta-
   WPS that uses a combination of an intelligent user agent and a
   distributed indexing service to access the requested data from any
   available WPS.  The user-perceived functionality of such a meta-WPS
   will necessarily be restricted to the lowest common denominator.
   One hopes that, through "market" forces, the number of protocols in
   use will decrease (or converge) and that the functionality will
   increase.

   The degree to which proactive data gathering is permitted may be
   limited by national laws.  It may be appropriate to gather data about
   which hosts have databases, but not about the data in those
   databases.

6. CONCLUSIONS

   We now revisit the questions we set out to answer and briefly
   describe the key conclusions.

6.A.  WHAT FUNCTIONS SHOULD A WHITE PAGES DIRECTORY PERFORM?

   After all the discussion we come to the conclusion that there are two
   functions the white pages service must provide: searching and
   retrieving.

   Searching is the ability to find people given some fuzzy information
   about them, such as "find the Postel in southern California".
   Searches may often return a list of matches.
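
   Fuzzy matching of this kind can be approximated with approximate
   string comparison.  A sketch using present-day Python's difflib
   (the list of entries is invented; a real service would of course
   also use location and other attributes):

```python
import difflib

people = ["Jon Postel", "Celeste Anderson",
          "Chris Weider", "Jon Crowcroft"]

def fuzzy_search(query, entries, cutoff=0.6):
    """Return entries whose surnames approximately match the
    query, best matches first -- searches 'may often return a
    list' of candidates."""
    surnames = {e.split()[-1].lower(): e for e in entries}
    matches = difflib.get_close_matches(query.lower(), surnames,
                                        cutoff=cutoff)
    return [surnames[m] for m in matches]

results = fuzzy_search("Postle", people)  # misspelled query
```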

   The recognition of the importance of indexing in searching is a major
   conclusion of these discussions.  It is clear that users want fast
   searching across the distributed database on attributes different
   from the database structure.  It is possible that pre-computed
   indices can satisfy this desire.

   Retrieval is obtaining additional information associated with a
   person, such as address, telephone number, email mailbox, and
   security certificate.

   This last item, the security certificate, is a type of information
   associated with an individual that is essential for end-to-end
   authentication, integrity, and privacy in Internet applications.
   The development of secure applications in the Internet depends on a
   directory system for retrieving the security certificate associated
   with an individual.  The PEM system has been developed and is ready
   to go into service, but is now held back by the lack of an easily
   used directory of security certificates.

   PEM security certificates are part of the X.509 standard.  If X.500
   is going to be set aside, then other alternatives need to be
   explored.  If X.500 distinguished naming is scrapped, some other
   structure will need to come into existence to replace it.

6.B.  WHAT APPROACHES WILL PROVIDE US WITH A WHITE PAGES DIRECTORY?

   It is clear that there will be several technologies in use.  The
   approach must be to promote the interoperation of the multiple
   technologies.  This is traditionally done by having conventions or
   standards for the interfaces and communication forms between the
   different systems.  The need is for a specification of the simplest
   common communication form that is powerful enough to provide the
   necessary functionality.  This allows a variety of user interfaces on
   any number of client systems communicating with different types of
   servers.  The IETF working group (WG) method of developing standards
   seems well suited to this problem.

   This "common ground" approach aims to provide the ubiquitous WPS with
   high functionality and a low entry cost.  This may be done by
   singling out issues that are common to the various competing WPS and
   coordinating work on these in specific and dedicated IETF WGs (e.g.,
   data model coordination).  The IETF will continue development of
   X.500 and Whois++ as two separate entities.  The work on these two
   protocols will be broken down into various small and focused WGs that
   address specific technical issues, using ideas from both X.500 and
   Whois++.  The goal is to produce common standards for information
   formats, data models, and access protocols.  Where possible, the
   results of such
   a WG will be used in both Whois++ and X.500, although it is envisaged
   that several WGs may work on issues that remain specific to one of
   the protocols.  The IDS (Integrated Directory Services) WG continues
   to work on non-protocol specific issues.  To achieve coordination
   that leads to convergence rather than divergence, the applications
   area directorate will provide guidance to the Application Area
   Directors as well as to the various WGs, and the User Services Area
   Council (USAC) will provide the necessary user perspective.

6.C.  WHAT ARE THE PROBLEMS TO BE OVERCOME?

   There are several problems whose solution would make progress
   towards a white pages service more rapid.  We need:

   To make it much easier to be part of the Internet white pages than
   bringing up an X.500 DSA, while still making good use of the already
   deployed X.500 DSAs.

   To define new simpler white pages services (such as Whois++) such
   that numerous people can create implementations.

   To provide some central management of the X.500 system to promote
   good operation.

   To select a naming scheme.

   To develop a set of index-servers, and indexing techniques, to
   provide for fast searching.

   To provide for the storage and retrieval of security certificates.

6.D.  WHAT SHOULD THE DEPLOYMENT STRATEGY BE?

   We should capitalize on the existing infrastructure of already
   deployed X.500 DSAs.  This means that some central management must be
   provided, and that easy-to-use user interfaces (such as the Gopher
   "gateway") must be widely deployed.

   -- Document the selection of a naming scheme (e.g., the NADF scheme).

   -- Adopt the "common ground" model.  Encourage the development of
      several different services, with a goal of interworking between
      them.

   -- Develop a specification of the simplest common communication form
      that is powerful enough to provide the necessary functionality.
      The IETF working group method of developing standards seems well
      suited to this problem.

   -- Make available information about how to set up new servers (of
      whatever kind) in "cookbook" form.

7. SUMMARY

   While many issues have been raised, there are just a few where we
   recommend that action be taken to support specific elements of the
   overall white pages system.

   RECOMMENDATIONS

    1.  Adopt the common ground approach - give all protocols equal
        access to all data.  That is, encourage multiple client and
        server types, and the standardization of an interoperation
        protocol between them.  The clients may be simple clients,
        front-ends, "gateways", or embedded in other information access
        clients, such as Gopher or WWW client programs.  The
        interoperation protocol will define some message types, message
        sequences, and data fields.   An element of this protocol should
        be the use of URLs.

    2.  Promote the development of index-servers.  The index-servers
        should use several different methods of gathering data for their
        indices, and several different methods for searching their
        indices.

    3.  Support a central management for the X.500 system.  To get the
        best advantage of the effort already invested in the X.500
        directory system it is essential to provide the relatively small
        amount of central management necessary to keep the system
        functioning.

    4.  Support the development of security certificate storage and
        retrieval from the white pages service.  The most practical
        approach is to initially focus on getting this supported by the
        existing X.500 directory infrastructure.  It should also include
        design and development of the storage and retrieval of security
        certificates in other white pages services, such as Whois++.

8.  REFERENCES

   [1]  Sollins, K., "Plan for Internet Directory Services", RFC 1107,
        M.I.T. Laboratory for Computer Science, July 1989.

   [2]  Hardcastle-Kille, S., "Replication Requirements to provide an
        Internet Directory using X.500", RFC 1275, University College
        London, November 1991.

   [3]  Weider, C., and J. Reynolds, "Executive Introduction to
        Directory Services Using the X.500 Protocol", FYI 13, RFC 1308,
        ANS, USC/Information Sciences Institute, March 1992.

   [4]  Weider, C., Reynolds, J., and S. Heker, "Technical Overview of
        Directory Services Using the X.500 Protocol", FYI 14, RFC 1309,
        ANS, USC/Information Sciences Institute, JvNC, March 1992.

   [5]  Hardcastle-Kille, S., Huizer, E., Cerf, V., Hobby, R., and S.
        Kent, "A Strategic Plan for Deploying an Internet X.500
        Directory Service", RFC 1430, ISODE Consortium, SURFnet bv,
        Corporation for National Research Initiatives, University of
        California, Davis, Bolt, Beranek, and Newman, February 1993.

   [6]  Jurg, A., "Introduction to White pages services based on X.500",
        Work in Progress, October 1993.

   [7]  The North American Directory Forum, "NADF Standing Documents: A
        Brief Overview", RFC 1417, The North American Directory Forum,
        February 1993.

   [8]  NADF, "An X.500 Naming Scheme for National DIT Subtrees and its
        Application for c=CA and c=US", Standing Document 5 (SD-5).

   [9]  Garcia-Luna, J., Knopper, M., Lang, R., Schoffstall, M.,
        Schraeder, W., Weider, C., Yeong, W., Anderson, C. (ed.), and J.
        Postel (ed.), "Research in Directory Services: Fielding
        Operational X.500 (FOX)", FOX Project Final Report, January
        1992.

9. GLOSSARY

      API - Application Program Interface
      COTS - commercial off the shelf
      CSO - a phonebook service developed by the University of Illinois
      DAP - Directory Access Protocol
      DIT - Directory Information Tree
      DNS - Domain Name System
      DUI - Directory User Interface
      DUA - Directory User Agent
      DSA - Directory System Agent
      FOX - Fielding Operational X.500 project
      FRICC - Federal Research Internet Coordinating Committee
      IETF - Internet Engineering Task Force
      ISODE - ISO Development Environment
      LDAP - Lightweight Directory Access Protocol
      NADF - North American Directory Forum
      PEM - Privacy Enhanced Mail
      PSI - Performance Systems International
      SQL - Structured Query Language
      QUIPU - an X.500 DSA which is a component of the ISODE package
      UFN - User Friendly Name
      URI - Uniform Resource Identifier
      URL - Uniform Resource Locator
      WAIS - Wide Area Information Server
      WPS - White Pages Service
      WWW - World Wide Web

10.  ACKNOWLEDGMENTS

   This report is assembled from the words of the following participants
   in the email discussion and the meeting.  The authors are responsible
   for selecting and combining the material.  Credit for all the good
   ideas goes to the participants.  Any bad ideas are the responsibility
   of the authors.


      Allan Cargille                  University of Wisconsin
      Steve Crocker                   TIS
      Peter Deutsch                   BUNYIP
      Peter Ford                      LANL
      Jim Galvin                      TIS
      Joan Gargano                    UC Davis
      Arlene Getchell                 ES.NET
      Rick Huber                      INTERNIC - AT&T
      Christian Huitema               INRIA
      Erik Huizer                     SURFNET
      Tim Howes                       University of Michigan
      Steve Kent                      BBN
      Steve Kille                     ISODE Consortium
      Mark Kosters                    INTERNIC - Network Solutions
      Paul Mockapetris                ARPA
      Paul-Andre Pays                 INRIA
      Dave Piscitello                 BELLCORE
      Marshall Rose                   Dover Beach Consulting
      Sri Sataluri                    INTERNIC - AT&T
      Mike Schwartz                   University of Colorado
      David Staudt                    NSF
      Einar Stefferud                 NMA
      Chris Weider                    MERIT
      Scott Williamson                INTERNIC - Network Solutions
      Russ Wright                     LBL
      Peter Yee                       NASA

11.  SECURITY CONSIDERATIONS

   While there are comments in this memo about privacy and security,
   there is no serious analysis of security considerations for a white
   pages or directory service in this memo.

12.  AUTHORS' ADDRESSES

   Jon Postel
   USC/Information Sciences Institute
   4676 Admiralty Way
   Marina del Rey, CA 90292

   Phone: 310-822-1511
   Fax:   310-823-6714
   EMail: Postel@ISI.EDU


   Celeste Anderson
   USC/Information Sciences Institute
   4676 Admiralty Way
   Marina del Rey, CA 90292

   Phone: 310-822-1511
   Fax:   310-823-6714
   EMail: Celeste@ISI.EDU

APPENDIX 1

   The following White Pages Functionality List was developed by Chris
   Weider and amended by participants in the current discussion of an
   Internet white pages service.

   Functionality list for a White Pages / Directory service

   Serving information on People only

   1.1 Protocol Requirements

      a) Distributability
      b) Security
      c) Searchability and easy navigation
      d) Reliability (in particular, replication)
      e) Ability to serve the information desired (in particular,
         multi-media information)
      f) Obvious benefits to encourage installation
      g) Protocol support for maintenance of data and 'knowledge'
      h) Ability to support machine use of the data
      i) Must be based on Open Standards and respond rapidly to correct
         deficiencies
      j) Serve new types of information (not initially planned) only
         upon request
      k) Allow different operation modes

   1.2 Implementation Requirements

      a) Searchability and easy navigation
      b) An obvious and fairly painless upgrade path for organizations
      c) Obvious benefits to encourage installation
      d) Ubiquitous clients
      e) Clients that can do exhaustive search and/or cache useful
         information and use heuristics to narrow the search space in
         case of ill-formed queries
      f) Ability to support machine use of the data
      g) Stable APIs

   1.3 Sociological Requirements

      a) Shallow learning curve for novice users (both client and
         server)
      b) Public domain servers and clients to encourage experimentation
      c) Easy techniques for maintaining data, to encourage users to
         keep their data up-to-date
      d) (particularly for organizations) The ability to hide an
         organization's internal structure while making the data public.
      e) Widely recognized authorities to guarantee unique naming during
         registrations (This is specifically X.500 centric)
      f) The ability to support the privacy / legal requirements of all
         participants while still being able to achieve good coverage.
      g) Supportable infrastructure (Perhaps an identification of what
         infrastructure support requires and how that will be
         maintained)

   Although the original focus of this discussion was on White Pages,
   many participants believe that a Yellow Pages service should be built
   into a White Pages scheme.

   Functionality List for Yellow Pages service

   Yellow pages services, with data primarily on people

   2.1 Protocol Requirements

      a) all listed in 1.1
      b) Very good searching, perhaps with semantic support OR
      b2) Protocol support for easy selection of proper keywords to
         allow searching
      c) Ways to easily update and maintain the information required by
         the Yellow Pages services
      d) Ability to set up specific servers for specific applications or
         a family of applications while still working with the WP
         information bases

   2.2 Implementation Requirements

      a) All listed in 1.2
      b) Server or client support for relevance feedback

   2.3 Sociological Requirements

      a) all listed in 1.3

   Advanced directory services for resource location (not just people
   data)

   3.1 Protocol Requirements

      a) All listed in 2.1
      b) Ability to track very rapidly changing data
      c) Extremely good and rapid search techniques

   3.2 Implementation Requirements

      a) All listed in 2.2
      b) Ability to integrate well with retrieval systems
      c) Speed, Speed, Speed

   3.3 Sociological Requirements

      a) All listed in 1.3
      b) Protocol support for 'explain' functions: 'Why didn't this
         query work?'